What retail needs is escrowed microchannel hubs (what lightning provides,
for example), which enable untrusted instant payments. Not reliance on
single-signer zeroconf transactions that can never be made safe.
On Fri, Jun 19, 2015 at 5:47 PM, Andreas Petersson andr...@petersson.at
wrote:
I have
On Thu, Jun 18, 2015 at 6:31 AM, Mike Hearn m...@plan99.net wrote:
The first issue is how are decisions made in Bitcoin Core? I struggle to
explain this to others because I don't understand it myself. Is it a vote
of people with commit access? Is it a 100% agreement of core developers
and if
On Thu, Jun 18, 2015 at 2:58 PM, Jeff Garzik jgar...@bitpay.com wrote:
The whole point is getting out in front of the need, to prevent
significant negative impact to users when blocks are consistently full.
To do that, you need to (a) plan forward, in order to (b) set a hard fork
date in
Matt, I for one do not think that the block size limit should be raised at
this time. Matt Corallo also started the public conversation over this
issue on the mailing list by stating that he was not in favor of acting now
to raise the block size limit. I find it a reasonable position to take that
Peter it's not clear to me that your described protocol is free of miner
influence over the vote, by artificially generating transactions which they
claim in their own blocks, or conforming incentives among voters by opting
to be with the (slight) majority in order to minimize fees.
Wouldn't it
Certainly, but I would drop discussion of IsStandard or consensus rules.
On Jun 6, 2015 1:24 AM, Wladimir J. van der Laan laa...@gmail.com wrote:
On Fri, Jun 05, 2015 at 09:46:17PM -0700, Mark Friedenbach wrote:
Rusty, this doesn't play well with SIGHASH_SINGLE which is used in
assurance contracts among other things. Sometimes the ordering is set by
the signing logic itself...
On Jun 5, 2015 9:43 PM, Rusty Russell ru...@rustcorp.com.au wrote:
Title: Canonical Input and Output Ordering
Author: Rusty
Why is this your business or the business of anyone on this list? Take it
somewhere else.
On Thu, Jun 4, 2015 at 2:52 PM, Sven Berg svenb...@airmail.cc wrote:
1) Hours/week have you devoted to each project out of a 40hr work week
2) Upfront and ongoing fees for use of your name
3) Break
, at 12:16 AM, Mark Friedenbach m...@friedenbach.org
wrote:
You are correct! I am maintaining a 'checksequenceverify' branch in my
git
repository as well, an OP_RCLTV using sequence numbers:
https://github.com/maaku/bitcoin/tree/checksequenceverify
Most of the interesting use cases
The reference implementation is available at the following git repository:
https://github.com/maaku/bitcoin/tree/sequencenumbers
I request that the BIP editor please assign a BIP number for this work.
Sincerely,
Mark Friedenbach
On Wed, May 27, 2015 at 3:11 AM, Mike Hearn m...@plan99.net wrote:
As I believe out of all proposed protocols Satoshi's is still the most
powerful, I would suggest that any change to the semantics on nSequence be
gated by a high bit or something, so the original meaning remains available
Sequence numbers appear to have been originally intended as a mechanism for
transaction replacement within the context of multi-party transaction
construction, e.g. a micropayment channel. The idea is that a participant
can sign successive versions of a transaction, each time incrementing the
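The replacement rule described in that excerpt can be modeled in a few lines. This is a toy sketch with hypothetical names, not Bitcoin Core code: among conflicting unconfirmed versions of the same transaction, keep the one with the highest sequence number.

```python
from dataclasses import dataclass

@dataclass
class TxVersion:
    inputs: tuple      # outpoints spent; identical across all versions
    outputs: dict      # payee -> amount; updated with each channel state
    n_sequence: int    # incremented for every replacement

def prefer(current: "TxVersion", candidate: "TxVersion") -> "TxVersion":
    """Satoshi-style replacement: a later version of the same
    transaction (higher nSequence) displaces an earlier one."""
    if candidate.inputs != current.inputs:
        raise ValueError("not versions of the same transaction")
    return candidate if candidate.n_sequence > current.n_sequence else current
```

Each signed update in a micropayment channel would shift the output split and bump `n_sequence`, so only the latest state survives replacement.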
Please let's at least have some civility and decorum on this list.
On Tue, May 26, 2015 at 1:30 PM, joli...@airmail.cc wrote:
You're the Chief Scientist of __ViaCoin__, an alt with 30 second blocks
and you have big banks as clients. Shit like replace-by-fee and leading
the anti-scaling mob is
wrote:
On 08/05/2015 22:33, Mark Friedenbach wrote:
* For each block, the miner is allowed to select a different difficulty
(nBits) within a certain range, e.g. +/- 25% of the expected difficulty,
and this miner-selected difficulty is used for the proof of work check.
In
addition
Micropayment channels are not pie in the sky proposals. They work today on
Bitcoin as it is deployed without any changes. People just need to start
using them.
On May 10, 2015 11:03, Owen Gunden ogun...@phauna.org wrote:
On 05/08/2015 11:36 PM, Gregory Maxwell wrote:
Another related point
constrains the maximum allowed block size to be within a range supported by
fees on the network, providing an emergency relief valve that we can be
assured will only be used at significant cost.
Mark Friedenbach
* There have over time been various discussions on the bitcointalk forums
about dynamically
Transactions don't expire. But if the wallet is online, it can periodically
choose to release an already created transaction with a higher fee. This
requires replace-by-fee to be sufficiently deployed, however.
On Fri, May 8, 2015 at 1:38 PM, Raystonn . rayst...@hotmail.com wrote:
I have a
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:
This is a clever way to tie block size to fees.
I would just like to point out though that it still fundamentally is using
hard block size limits to enforce scarcity. Transactions with below market
fees will hang in limbo
resorting to dropping transactions after a prolonged delay.
Aaron Voisine
co-founder and CEO
breadwallet.com
On Fri, May 8, 2015 at 3:45 PM, Mark Friedenbach m...@friedenbach.org
wrote:
On Fri, May 8, 2015 at 3:43 PM, Aaron Voisine vois...@gmail.com wrote:
This is a clever way to tie
The problems with that are larger than time being unreliable. It is no
longer reorg-safe as transactions can expire in the course of a reorg and
any transaction built on the now expired transaction is invalidated.
On Fri, May 8, 2015 at 1:51 PM, Raystonn rayst...@hotmail.com wrote:
Replace by
At this moment anyone can alter the txid. Assume transactions are 100%
malleable.
On Apr 16, 2015 9:13 AM, s7r s...@sky-ip.org wrote:
Hi Pieter,
Thanks for your reply. I agree. Allen has a good point in the previous
email too, so the suggestion might not fix anything and complicate things.
Thank you Jorge for the contribution of the Stag Hunt terminology. It is
much better than a politically charged scorched earth.
On Feb 21, 2015 11:10 AM, Jorge Timón jti...@jtimon.cc wrote:
I agree scorched earth is a really bad name for the 0 conf protocol
based on game theory. I would have
There should not be a requirement at this level to ensure validity. That
would overly constrain the use cases of implementations of your protocol. It is
not difficult to imagine use cases where parties generate chained
transactions on top of unconfirmed transactions. Although malleability
currently makes
On Sat, Aug 9, 2014 at 6:10 AM, Sergio Lerner sergioler...@certimix.com
wrote:
Hi Tim,
It's clear from the paper that the second party in the protocol can
de-anonymize the first party. So it seems that dishonest shufflers would
prefer to be in that position in the queue.
That's not
On 08/06/2014 01:02 PM, Tom Harding wrote:
With first-eligible-height and last-eligible-height, creator could
choose a lifetime shorter than the max, and in addition, lock the whole
thing until some point in the future.
Note that this would be a massive, *massive* change that would
completely
On 08/06/2014 01:20 PM, Peter Todd wrote:
The general case doesn't require transmission of any merkle data; it
is derived from the tx data.
How can that possibly be the case? The information is hidden behind the
Merkle root in the transaction. The validator needs to know whether
there is an
Can someone explain to these guys and the public why promising to limit
yourselves to *only* a 50% chance of successfully double-spending a 6
confirm transaction is still not acceptable?
q=0.4
z=0 P=1
z=1 P=0.828861
z=2 P=0.736403
z=3 P=0.664168
z=4 P=0.603401
z=5
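The figures in the table follow from the attacker catch-up probability in section 11 of Satoshi's paper, where q is the attacker's share of hashrate and z the number of confirmations. A short sketch:

```python
import math

def attacker_success(q: float, z: int) -> float:
    """Probability an attacker with hashrate fraction q ever overtakes
    the honest chain from z blocks behind (Satoshi 2008, section 11)."""
    p = 1.0 - q
    lam = z * (q / p)  # expected attacker progress while z honest blocks arrive
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam ** k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total
```

With q=0.4 this reproduces the table (z=1 gives 0.8289, z=2 gives 0.7364, and so on), and at z=6 the attacker still succeeds with roughly a 50.4% chance, which is the point being made above about 6-confirmation double spends.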
Sergio, why is preventing mining pools a good thing? The issue is not
mining pools, which provide a needed service in greatly reducing
variance beyond what any proposal like this can do.
The issue is centralized transaction selection policies, which is
entirely orthogonal. And the solution
Do you need to do full validation? There's an economic cost to mining
invalid blocks, and even if that were acceptable there's really no
reason to perform such an attack. The result would be similar to a block
withholding attack, but unlike block withholding it would be trivially
detectable
Not with current script, but there are mechanisms by which you can do a
digital signature where signing two pieces of information reveals the
ECDSA k parameter, thereby allowing anyone to recover the private key
and steal the coins.
Practically speaking, these are not very safe systems to use.
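The classic instance of this is ECDSA nonce reuse: two signatures sharing the same k leak k, and with it the private key. Below is a self-contained sketch over secp256k1 with toy scalars — illustrative textbook ECDSA, not production code (needs Python 3.8+ for `pow(x, -1, n)`):

```python
# secp256k1 domain parameters
P = 2**256 - 2**32 - 977
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def inv(a, m):
    return pow(a, -1, m)          # modular inverse, Python 3.8+

def ec_add(A, B):
    if A is None: return B        # None is the point at infinity
    if B is None: return A
    if A[0] == B[0] and (A[1] + B[1]) % P == 0:
        return None
    if A == B:
        lam = (3 * A[0] * A[0]) * inv(2 * A[1], P) % P
    else:
        lam = (B[1] - A[1]) * inv(B[0] - A[0], P) % P
    x = (lam * lam - A[0] - B[0]) % P
    return (x, (lam * (A[0] - x) - A[1]) % P)

def ec_mul(k, Q):
    R = None
    while k:                      # double-and-add scalar multiplication
        if k & 1:
            R = ec_add(R, Q)
        Q = ec_add(Q, Q)
        k >>= 1
    return R

def sign(d, z, k):
    """Textbook ECDSA. k MUST be unique per signature; reusing it
    across two messages is exactly the flaw demonstrated below."""
    r = ec_mul(k, G)[0] % N
    s = inv(k, N) * (z + r * d) % N
    return r, s

def recover_key(r, s1, z1, s2, z2):
    """Given two signatures sharing r (same k), solve
    s_i = k^-1 (z_i + r*d) for k, then for the private key d."""
    k = (z1 - z2) * inv(s1 - s2, N) % N
    return (s1 * k - z1) * inv(r, N) % N
```

Anyone observing both signatures can run `recover_key` and steal the coins, which is why schemes that deliberately force a second signature under the same k act as a bond.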
The correct approach here is header hash-tree commitments which enable
compact (logarithmic) SPV proofs that elide nearly all intervening
headers since the last checkpoint. You could then query the hash tree
for references to any of the headers you actually need.
See this message for details:
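The details live behind the elided link, but the logarithmic-size claim can be illustrated with a generic Merkle inclusion proof over a list of header hashes. This is an illustrative sketch, not the committed structure the referenced message proposes:

```python
import hashlib

def H(b: bytes) -> bytes:
    # Bitcoin-style double SHA-256
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root_and_proof(leaves, index):
    """Return (root, proof): proof is the list of sibling hashes
    linking leaves[index] to the root -- about log2(n) hashes."""
    proof, layer, idx = [], list(leaves), index
    while len(layer) > 1:
        if len(layer) % 2:
            layer.append(layer[-1])       # Bitcoin-style odd duplication
        proof.append(layer[idx ^ 1])      # sibling at this level
        layer = [H(layer[i] + layer[i + 1]) for i in range(0, len(layer), 2)]
        idx //= 2
    return layer[0], proof

def verify(leaf, index, proof, root):
    h = leaf
    for sib in proof:
        h = H(sib + h) if index & 1 else H(h + sib)
        index //= 2
    return h == root
```

A client holding only the root can check any single header with a handful of hashes instead of downloading every intervening header, which is the property the compact SPV proofs rely on.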
I know the likelihood of this happening is slim, but if these are the
desired features we should consider switching to monotone (monotone.ca)
which has a much more flexible DAG structure and workflow built around
programmable multi-sig signing of commits. We could still maintain the
github account
I don't think such a pull request would be accepted. The point was to
minimize impact to the block chain. Each extras txout adds 9 bytes
minimum, with zero benefit over serializing the data together in a
single OP_RETURN.
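The 9-byte minimum is just the fixed per-txout serialization cost: an 8-byte value plus a 1-byte CompactSize script-length prefix (for scripts under 253 bytes). A quick check, comparing 40 bytes of data carried as one OP_RETURN output versus two 20-byte ones:

```python
def txout_size(script_len: int) -> int:
    # 8-byte nValue + 1-byte CompactSize length (script < 253 bytes) + script
    return 8 + 1 + script_len

# script = OP_RETURN (1 byte) + pushdata length byte (1) + payload
one_output  = txout_size(1 + 1 + 40)       # single combined OP_RETURN
two_outputs = 2 * txout_size(1 + 1 + 20)   # data split across two txouts
assert two_outputs - one_output == 11      # 9 bytes txout overhead + repeated OP_RETURN and push byte
```

Splitting the payload buys nothing and costs the chain the extra txout framing every time.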
On 05/03/2014 11:39 AM, Peter Todd wrote:
The standard format ended up
Is it more complex? The current implementation using template matching
seems more complex than `if script.vch[0] == OP_RETURN &&
script.vch.size() <= 42`
On 05/03/2014 12:08 PM, Gregory Maxwell wrote:
On Sat, May 3, 2014 at 11:55 AM, Mark Friedenbach m...@monetize.io wrote:
I don't think
On 04/28/2014 07:32 AM, Sergio Lerner wrote:
So you agree that: you need a periodic connection to an honest node, but
during an attack you may lose that connection. This is the assumption
we should be working on, and my use case (described in
I would prefer to avoid. By
using back-links you make it have log(N) space usage.
On 04/26/2014 07:39 PM, Sergio Lerner wrote:
On 26/04/2014 10:43 p.m., Mark Friedenbach wrote:
Sergio,
First of all, let's define what an SPV proof is: it is a succinct
sequence of bits which can
I'm not convinced of the necessity of this idea in general, but if it
were to be implemented I would recommend serializing the nVersion field
as a VarInt (Pieter Wuille's multi-byte serialization format) and using
the remaining space of the 4 bytes as your extra nonce.
That would allow
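The multi-byte format in question is, to the best of my recollection, the VarInt from Bitcoin Core's serialize.h: base-128 with the high bit as a continuation flag, plus an offset so every value has exactly one encoding. A sketch of that scheme:

```python
def write_varint(n: int) -> bytes:
    """Encode a non-negative integer; each byte carries 7 bits, the
    high bit set on all but the last, with a -1 offset between rounds
    so the encoding is bijective (no redundant leading bytes)."""
    out = []
    while True:
        out.append((n & 0x7F) | (0x80 if out else 0x00))
        if n <= 0x7F:
            break
        n = (n >> 7) - 1
    return bytes(reversed(out))

def read_varint(data: bytes) -> int:
    n = 0
    for b in data:
        n = (n << 7) | (b & 0x7F)
        if b & 0x80:
            n += 1        # undo the encoder's offset
        else:
            return n
    raise ValueError("truncated varint")
```

A version of 1 then serializes to a single byte, which is what would leave the remaining three bytes of the old fixed-width field free for extra-nonce use.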
On 04/27/2014 05:36 AM, Sergio Lerner wrote:
Without invoking moon math or assumptions of honest peers and
jamming-free networks, the only way to know a chain is valid is to
witness each and every block. SPV nodes on the other hand,
simply trust that the most-work chain is a valid
I think you're misunderstanding the point. The way you get IsStandard
changed is that you make an application-oriented BIP detailing the use
of some new standard transaction type (say, generalized hash-locked
transactions for atomic swaps). We then discuss that proposal for its
technical merits
There's no need to be confrontational. I don't think anyone here objects
to the basic concept of proof-of-stake. Some people, myself included,
have proposed protocols which involve some sort of proof of stake
mechanism, and the idea itself originated as a mechanism for eliminating
checkpoints,
remember who I saw discussing this idea. Might have been Vitalik
Buterin?
On 27 April 2014 6:39:28 AM AEST, Mark Friedenbach m...@monetize.io wrote:
There's no need to be confrontational. I don't think anyone here
objects
to the basic concept of proof-of-stake. Some people, myself included
Sergio,
First of all, let's define what an SPV proof is: it is a succinct
sequence of bits which can be transmitted as part of a non-interactive
protocol that convincingly establishes for a client without access to
the block chain that for some block B, B has an ancestor A at some
specified
Testnet vs mainnet is quite a separate issue than bitcoin vs altcoin.
Unfortunately few of the alts ever figured this out.
On 04/22/2014 01:39 AM, Tamas Blummer wrote:
Extra encoding for testnet is quite useless complexity in the face of many
alt chains.
BIPS should be chain agnostic.
Regards,
That wasn't what I was saying. Right now the primacy of a block is
determined by the time at which the `block` message is received, which
is delayed due to both the time it takes to transmit the block data and
the time it takes to validate. Headers-first, on the other hand, has the
option of basing
As soon as we switch to headers
first - which will be soon - there will be no difference in propagation
time no matter how large the block is. Only 80 bytes will be required to
propagate the block header which establishes priority for when the block is
fully validated.
On Apr 20, 2014 6:56 PM,
Not necessarily. Running a private server involves listening to the p2p
network for incoming transactions, performing validation on receipt and
organizing a mempool, performing transaction selection, and relaying
blocks to auditors - none of which is tested in a reindex.
A reindex would give you
XP is no longer receiving security patches from Microsoft, and hasn't been
for some time. There are known remote exploits that aren't going to be
fixed, ever.
On Apr 16, 2014 8:15 AM, Kevin kevinsisco61...@gmail.com wrote:
On 4/16/2014 4:14 AM, Wladimir wrote:
Hello,
Today I noticed that
On 04/16/2014 09:27 AM, Kevin wrote:
Should we then add an alert message to wallet installers such as, "Such
and such will not run on Windows XP"?
It's not really our place to police that ... plus it's perfectly safe to
be running Bitcoin Core as a full node on XP. It's just the wallet
platform that the original
vendor (MS) no longer supports themselves.
On Apr 16, 2014, at 9:35 AM, Mark Friedenbach m...@monetize.io wrote:
On 04/16/2014 09:27 AM, Kevin wrote:
Should we then add an alert message to wallet installers such as, "Such
and such will not run on Windows XP"?
It's
On 04/16/2014 02:29 PM, Kevin wrote:
Okay, so how about an autoupdate function which pulls a workaround off
the server? Sooner or later, the vulnerabilities must be faced.
NO. Bitcoin Core will never have an auto-update functionality. That
would be a single point of failure whose compromise
You took the quote out of context:
a full node can copy the chain state from someone else, and check that
its hash matches what the block chain commits to. It's important to
note that this is a strict reduction in security: we're now trusting
that the longest chain (with most proof of work)
Checkpoints will go away, eventually.
On 04/10/2014 02:34 PM, Jesus Cea wrote:
On 10/04/14 18:59, Pieter Wuille wrote:
It's important to
note that this is a strict reduction in security: we're now trusting
that the longest chain (with most proof of work) commits to a valid
UTXO set (at some
On 04/09/2014 09:09 AM, Tamas Blummer wrote:
Yes, SPV is a sufficient API to a trusted node to build sophisticated
features not offered by the core.
SPV clients of the border router will build their own archive and
indices based on their interest of the chain therefore the
border router core
I've advocated for this in the past, and reasonable counter-arguments I
was presented with are: (1) bittorrent is horribly insecure - it would
be easy to DoS the initial block download if that were the goal, and (2)
there's a reasonable pathway to doing this all in-protocol, so there's
no reason
Flavien, capital is wealth or resources available for the stated purpose
of the company. These bitcoins represent nothing more than a speculative
floor owned by the investors, not the company.
On 04/07/2014 07:00 AM, Flavien Charlon wrote:
Jorge, they'd have to be. Otherwise, assuming the price
Right now running a full-node on my home DSL connection (1Mbps) makes
other internet activity periodically unresponsive. I think we've already
hit a point where resource requirements are pushing out casual users,
although of course we can't be certain that accounts for all lost nodes.
On
On 04/07/2014 09:57 AM, Gregory Maxwell wrote:
That is an implementation issue— mostly one that arises as an indirect
consequence of not having headers first and the parallel fetch, not a
requirements issue.
Oh, absolutely. But the question "why are people not running full
nodes?" has to do with
On 04/07/2014 12:00 PM, Tamas Blummer wrote:
Once a single transaction is pruned in a block, the block is no longer
eligible to be served to other nodes.
Which transactions are pruned can be rather custom e.g. even depending
on the wallet(s) of the node,
therefore I guess it is more handy to
On 04/07/2014 12:20 PM, Tamas Blummer wrote:
Validation has to be sequential, but that step can be deferred until the
blocks before a point are loaded and continuous.
And how do you find those blocks?
I have a suggestion: have nodes advertise which range of full blocks
they possess, then you
I'm afraid I'm going to be the jerk that requested more details and then
only nitpicks seemingly minor points in your introduction. But it's
because I need more time to digest the contents of your proposal. Until
then:
But moving value between chains is inconvenient; right now moving
value
On 03/24/2014 01:34 PM, Troy Benjegerdes wrote:
I'm here because I want to sell corn for bitcoin, and I believe it will be
more profitable for me to do that with a bitcoin-blockchain-based system
in which I have the capability to audit the code that executes the trade.
A discussion over such a
This isn't distributed-systems-development, it is bitcoin-development.
Discussion over chain parameters is a fine thing to have among people
who are interested in that sort of thing. But not here.
On 03/23/2014 04:17 PM, Troy Benjegerdes wrote:
I find it very irresponsible for Bitcoiners to on
Please, by all means: ignore our well-reasoned arguments about
externalized storage and validation cost and alternative solutions.
Please re-discover how proof of publication doesn't require burdening
the network with silly extra data that must be transmitted, kept, and
validated from now until
Jeff, there are *plenty* of places that lack local Internet access for
one or both participants.
Obviously making the case where both participants lack access to the
bitcoin network is difficult to secure, but not impossible (e.g. use a
telephony-based system to connect to a centralized
Timon, and Mark Friedenbach. It
is believed that the first explanation of this general idea is due to
Andrew Miller in his 7 Aug 2012 forum post titled The High-Value-Hash
Highway[2].
[1]http://sourceforge.net/p/bitcoin/mailman/message/32108143/
[2]https://bitcointalk.org/index.php?topic=98986.0
This ship may have already sailed, but...
Using milli- and micro- notation for currency units is also not very
well supported. Last time this thread was active, I believe there was a
suggestion to use 1 XBT == 1 uBTC. This would bring us completely within
the realm of supported behavior in
Only if you view bitcoin as no more than a payment network.
On Mar 1, 2014 10:24 AM, Jeff Garzik jgar...@bitpay.com wrote:
This is wandering far off-topic for this mailing list.
On Sat, Mar 1, 2014 at 12:45 PM, Troy Benjegerdes ho...@hozed.org wrote:
You can make the same argument against
-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA1
Transaction fees are a DoS mitigating cost to the person making the
transaction, but they are generally not paid to the people who
actually incur costs in validating the blockchain. Actual transaction
processing costs are an externality that is
What follows is a proposed BIP for human-friendly base-32
serialization with error correction encoding. A formatted version is
viewable as part of a gist with related code:
https://gist.github.com/maaku/8996338#file-bip-ecc32-mediawiki
An
On 02/12/2014 08:44 AM, Alan Reiner wrote:
Changing the protocol to use these static IDs is a pretty fundamental
change that would never happen in Bitcoin. But they can still be
useful at the application level to mitigate these issues.
Not to mention that it would be potentially very
Since you are taking the hash of Unicode data, I would strongly
recommend using a canonical form, e.g. Normalized Form C.
On 01/20/2014 09:42 AM, slush wrote:
Hi all,
during recent months we've reconsidered all comments which we
received from
On 01/18/2014 03:05 AM, Wladimir wrote:
On Sat, Jan 18, 2014 at 9:11 AM, Odinn Cyberguerrilla
ABISprotocol hat: on
regarding:
stuff not getting into blockchain in a day's time,
microdonations not facilitated as much as they could be,
Please point to your pull requests
On 01/17/2014 01:15 AM, Mike Hearn wrote:
I must say, this shed is mighty fine looking. It'd be a great place
to store our bikes. But, what colour should we paint it?
How about we split the difference and go with "privacy address"?
As
Too close
CPFP is *extremely* important. People have lost money because this
feature is missing. I think it's critical that it makes it into 0.9
If I get a low-priority donation from a blockchain.info wallet, that
money can disappear if it doesn't make it into
On 01/15/2014 04:05 PM, Jeremy Spilman wrote:
Might I propose "reusable address".
Say it like it is. This is the only suggestion so far that I really like.
No amount of finger wagging got people to stop using the block chain
for data storage, but
On 01/06/2014 10:31 PM, Thomas Voegtlin wrote:
You are right. The 256-way branching follows from the fact that the
tree was implemented using a key-value database operating with byte
strings (leveldb). With this implementation constraint, a
On 01/06/2014 10:13 AM, Peter Todd wrote:
On Sun, Jan 05, 2014 at 07:43:58PM +0100, Thomas Voegtlin wrote:
I have written a Python-levelDB implementation of this UTXO
hashtree, which is currently being tested, and will be added to
Electrum
JSON-RPC is a huge security risk. It's perfectly reasonable that
enabling it requires some technical mumbo-jumbo.
Are there specific configuration settings that you would like to see
exposed by the GUI?
On 12/19/2013 11:49 PM, Chris Evans wrote:
Hi Jeremy, Let's give a preview of the application-oriented BIPs I
mentioned:
Stateless validation and mining involves prefixing transaction and
block messages with proofs of their UTxO state changes. These are the
operational proofs I describe in
(Sorry Peter, this was meant for the whole list:)
On 12/20/2013 05:17 AM, Peter Todd wrote:
I've thought about this for awhile and come to the conclusion that
UTXO commitments are a really bad idea. I myself wanted to see them
implemented about a
On 12/20/2013 11:48 AM, Gregory Maxwell wrote:
A couple very early comments— I shared some of these with you on
IRC but I thought I'd post them to make them more likely to not get
lost.
I got the inputs from IRC, but thank you for posting to the
Hello fellow bitcoin developers. Included below is the first draft of
a BIP for a new Merkle-compressed data structure. The need for this
data structure arose out of the misnamed Ultimate blockchain
compression project, but it has since been
Transactions != blocks. There is no need for a merge block.
You are free to trade transactions off-line, so long as you are
certain the other parties are not secretly double-spending coins they
send you on the block chain.
When connection to the
On 12/13/2013 09:26 AM, Mike Hearn wrote:
I'm thinking about a use case I hope will become common next year
- pastebin style hosting sites for payment requests. Like, if I as
a regular end user wish to use the payment protocol, I could just
upload
On 11/15/13 4:41 PM, Drak wrote:
For years, people had a problem with "email address", instead
using "email number", but they got there eventually. Most people
nowadays use "email address". So "payment address" or "bitcoin
address" make better sense here
On 11/15/13 5:19 PM, Drak wrote:
Maybe, but again from the user's perspective they pay someone, and
they receive money - just like you do with paypal using an email
address. The technical bits in the middle dont matter to the user
and trying to
For this reason I'm in favor of skipping mBTC and moving straight to
uBTC. Having eight, or even five decimal places is not intuitive to
the average user. Two decimal places is becoming standard for new
national currencies, and we wouldn't be too far
On 11/14/13 2:00 PM, Alan Reiner wrote:
Just keep in mind it will be a little awkward that 54.3 uBTC is
the smallest unit that can be transferred [easily] and the standard
fees are 500 uBTC. It's not a deal breaker, it's just something
that
On 11/14/13 3:01 PM, Luke-Jr wrote:
I think we all know the problems with the term address. People
naturally compare it to postal addresses, email addresses, etc,
which operate fundamentally differently. I suggest that we switch to
using invoice id
On 11/4/13 10:16 AM, Peter Todd wrote:
Again, the right way to do this is define the standard to use the
last txout so that midstate compression can be applied in the
future. We can re-use this for merge-mining and other commitments
easily by
On 11/4/13 11:38 AM, Mike Hearn wrote:
The Merkle branch doesn't get stored indefinitely though, whereas
the coinbase hash does. The data stored in the coinbase [output]
can always just be the 256-bit root hash truncated to less.
I doubt the
Or SIGHASH of a transaction spending those coins or updating the SIN...
On 11/2/13 2:14 PM, Johnathan Corgan wrote:
On 11/01/2013 10:01 PM, bitcoingr...@gmx.com wrote:
Server provides a token for the client to sign.
Anyone else concerned about
If I understand the code correctly, it's not about rejecting blocks.
It's about noticing that 50% of recent blocks are declaring a version
number that is meaningless to you. Chances are, there's been a soft
fork and you should upgrade.
On 10/30/13
There's no reason the signing can't be done all at once. The wallet
app would create and sign three transactions, paying avg-std.dev., avg,
and avg+std.dev. fees. It just waits to broadcast the latter two until it
has to.
On 10/25/13 5:02 AM, Andreas
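That pre-signing scheme can be sketched as follows (hypothetical helper, fee amounts in satoshis):

```python
import statistics

def fee_ladder(recent_fees):
    """Return the three fee levels to pre-sign: mean - stdev, mean,
    mean + stdev of recently observed fees. Only the cheapest version
    is broadcast at first; the others are held back and released later
    if the transaction fails to confirm."""
    avg = statistics.mean(recent_fees)
    std = statistics.pstdev(recent_fees)
    return (max(0.0, avg - std), avg, avg + std)
```

Because all three versions are signed in one session, the wallet never needs the user's keys again just to bump the fee.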
Also somewhat related, I have been looking for some time now to
abstract out the UTXO and block databases so that a variety of
key/value stores could be used as a backend, configured by a command
line parameter. In particular, it would be interesting
Getting OT...
For a while I've wanted to combine one of these mnemonic code generators
with an NLP engine to do something like output a short story as the
passphrase, even a humorous one with the key encoded in the story
itself (remember the gist of the story and that's sufficient to
reconstruct
cryptographic
protocols expressed as bitcoin scripts. Input from any of the resident
cryptographers would be very appreciated.
Happy hacking,
Mark Friedenbach
On 8/18/13 8:09 PM, John Dillon wrote:
On the other hand, a tx with some txin proofs can be safely relayed by SPV
nodes, an interesting concept. Do the UTXO commitment people have
keeping proof
size small in mind?
More than a kilobyte, probably
John,
What you are recommending is a drastic change that the conservative
bitcoin developers probably wouldn't get behind (but let's see). However
proof-of-stake voting on protocol soft-forks has vast implications even
beyond the block size limit.
On Sat, Jun 01, 2013 at 10:32:07PM -0400, Gavin wrote:
Feels like a new opcode might be better.
Eg <data> <100> OP_NOP1
... Where op_nop1 is redefined to be 'verify depth' ...
I would suggest the more general 'push depth onto stack'. You can then