Re: [bitcoin-dev] Taproot Activation Meeting Reminder: April 6th 19:00 UTC bitcoin/bitcoin-dev

2021-04-06 Thread Adam Back via bitcoin-dev
As I understand it, Andrew Chow has a patchset for height-based activation
of Speedy Trial, so it would be great if people could review that to help
increase the review cycles.

Personally I also somewhat prefer block-height-based activation; to me it
seems like a mild step backwards to go back to MTP when, as far as I
understand, most consider height-based activation to be a better-defined,
cleaner, more predictable solution.


On Tue, 6 Apr 2021 at 15:35, Russell O'Connor via bitcoin-dev
> I'm pretty sure that the question of "is signalling still possible by the 
> time enough miners have upgraded and are ready to start signalling?" strongly 
> benefits from a guaranteed number of signaling periods that height based 
> activation offers.  Especially for the short activation period of Speedy 
> Trial.
> The other relevant value, giving enough time for users to upgrade, is not 
> very sensitive.  It's not like 180 days is a magic number where going over is 
> safe and going below is unsafe.
> That said, as Jeremy has pointed out before (maybe it was on IRC), we can 
> almost ensure a minimum of 7 retargeting periods by carefully selecting 
> signaling start and end dates to line up in the middle of expected 
> retargeting periods that we would otherwise choose with height based 
> activation. Why we would rather use MTP to fake a height based activation, I 
> will never understand. But if this is what it takes to activate taproot, that 
> is fine by me.
> The differences between height and MTP activation are too small to matter 
> that much for what is ultimately transient code.  As long as MTP activation 
> can pass code review it is okay with me.
> On Mon., Apr. 5, 2021, 06:35 Anthony Towns via bitcoin-dev, 
>  wrote:
>> On Sat, Apr 03, 2021 at 09:39:11PM -0700, Jeremy via bitcoin-dev wrote:
>> > As such, the main conversation in this agenda item is
>> > around the pros/cons of height or MTP and determining if we can reach 
>> > consensus
>> > on either approach.
>> Here's some numbers.
>> Given a desired signalling period of xxx days, where signaling begins
>> on the first retarget boundary after the starttime and ends on the last
>> retarget boundary before the endtime, this is how many retarget periods
>> you get (based on blocks since 2015-01-01):
>>  90 days: mainnet  5-7 full 2016-block retarget periods
>> 180 days: mainnet 11-14
>> 365 days: mainnet 25-27
>> 730 days: mainnet 51-55
>> (This applies to non-signalling periods like the activation/lock in delay
>> too of course. If you change it so that it ends at the first retarget
>> period after endtime, all the values just get incremented -- ie, 6-8,
>> 12-15 etc)
>> If I've got the maths right, then requiring 1814 of 2016 blocks to signal
>> means that having 7 periods instead of 5 lets you get a 50% chance of
>> successful activation by maintaining 89.04% of hashpower over the entire
>> period instead of 89.17%, while 55 periods instead of 51 gives you a 50%
>> chance of success with 88.38% hashpower instead of 88.40% hashpower.
>> So the "repeated trials" part doesn't look like it has any significant
>> effect on mainnet.
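[As a sanity check on those percentages, here is a small script of my own (not from the thread) that finds the sustained per-block signalling probability giving a 50% chance of at least one successful period, treating each block as an independent trial with probability equal to the signalling hashpower share.]

```python
import math

def period_success(p: float, n: int = 2016, k: int = 1814) -> float:
    # P(at least k of n blocks signal) when each block signals with
    # probability p; computed in log space so the binomial coefficients
    # never overflow a float.
    total = 0.0
    for i in range(k, n + 1):
        log_c = math.lgamma(n + 1) - math.lgamma(i + 1) - math.lgamma(n - i + 1)
        total += math.exp(log_c + i * math.log(p) + (n - i) * math.log(1 - p))
    return total

def hashpower_for_50pct(periods: int, n: int = 2016, k: int = 1814) -> float:
    # Solve 1 - (1 - s(p))**periods = 0.5 for p by bisection,
    # where s(p) is the per-period success probability above.
    target = 1 - 0.5 ** (1 / periods)
    lo, hi = 0.80, 0.99
    for _ in range(60):
        mid = (lo + hi) / 2
        if period_success(mid, n, k) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

[Running this reproduces the quoted figures: roughly 89.0% sustained hashpower for 7 periods versus roughly 89.2% for 5, i.e. the extra trials barely move the threshold.]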
>> If you target yy periods instead of xxx days, starting and ending on a
>> retarget boundary, you get the following stats from the last few years
>> of mainnet (again starting at 2015-01-01):
>>  1 period:  mainnet 11-17 days (range 5.2 days)
>>  7 periods: mainnet 87-103 days (range 15.4 days)
>> 13 periods: mainnet 166-185 days (range 17.9 days)
>> 27 periods: mainnet 352-377 days (range 24.4 days)
>> 54 periods: mainnet 711-747 days (range 35.0 days)
>> As far as I can see the questions that matter are:
>>  * is signalling still possible by the time enough miners have upgraded
>>and are ready to start signalling?
>>  * have nodes upgraded to enforce the new rules by the time activation
>>occurs, if it occurs?
>> But both those benefit from less real time variance, rather than less
>> variance in the numbers of signalling periods, at least in every way
>> that I can think of.
>> Corresponding numbers for testnet:
>>  90 days: testnet   5-85
>> 180 days: testnet  23-131
>> 365 days: testnet  70-224
>> 730 days: testnet 176-390
>> (A 50% chance of activating within 5 periods requires sustaining 89.18%
>> hashpower; within 85 periods, 88.26% hashpower; far smaller differences
>> with all the other ranges -- of course, presumably the only way the
>> higher block rates ever actually happen is by someone pointing an ASIC at
>> testnet, and thus controlling 100% of blocks for multiple periods anyway)
>>   1 period:  testnet 5.6minutes-26 days (range 26.5 days)
>>  13 periods: testnet 1-135 days (range 133.5 days)
>>  27 periods: testnet 13-192 days (range 178.3 days)
>>  54 periods: testnet 39-283 days (range 243.1 days)
>> 100 periods: testnet 114-476 days (range 360.9 days)
>>  (this is the value used in [0] in order to ensure 3 months'

Re: [bitcoin-dev] Yesterday's Taproot activation meeting on lockinontimeout (LOT)

2021-02-19 Thread Adam Back via bitcoin-dev
Personally I don't really have much of a view on whether LOT=true or
false is better in this context; they both seem safe given the current
context, where basically everyone is saying "are we there yet",
including pools (88.7% going out of their way to say YES). Not that
pools are deciding anything, being service providers to miners, who can
and will switch pools fast, and miners in turn being service providers
to the market and, as the various forks showed, will follow the market.

I think it's a very good idea for safety if there is tested and
reviewed code with an option to force LOT=true, even if the
bitcoin-core implementation ends up defaulting to LOT=false.

Part of the danger is rushed versions of things like BIP 91: miners
left brinkmanship just a bit too late to avert a BIP 148 chain split,
and BIP 91 was used to expedite activation to avoid that. The rushed
propose, code, review, ship cycle on that was dangerously fast; less
time and fewer eyes for review was the danger.

> would dev consensus around releasing LOT=false be considered as "developers 
> forcing their views on users"?

Given there are clearly people of both views (or who for now don't care
but might later), it would minimally be friendly and useful if
bitcoin-core had a LOT=true option; that IMO goes some way towards
avoiding assumptive control via defaults.

Otherwise it could be read as saying "developers on average
disapprove, but if you, the market, disagree, go figure it out for
yourself", which is not a good message for being defensive and avoiding
mis-interpretation of code repositories or shipped defaults as


On Fri, 19 Feb 2021 at 11:30, ZmnSCPxj via bitcoin-dev
> Good morning list,
> > This is absolutely the case, however note that the activation method itself 
> > is consensus code which executes as a part
> > of a fork, and one which deserves as much scrutiny as anything else. While 
> > taproot is a model of how a soft-fork should
> > be designed, this doesn't imply anything about the consensus code which 
> > represents the activation thereof.
> >
> > Hence all the debate around activation - ultimately its also defining a 
> > fork, and given the politics around it, one
> > which almost certainly carries significantly more risk than Taproot.
> >
> > Note that I don't believe anyone is advocating for "try to activate, and if 
> > it fails, move on". Various people have
> > various views on how conservative and timelines for what to do at that 
> > point, but I believe most in this discussion are
> > OK with flag-day-based activation (given some level of care) if it becomes 
> > clear Taproot is supported by a vast majority
> > of Bitcoin users and is only not activating due to lagging miner upgrades.
> Okay, I am backing off this proposal to force the LOT=false/true decision on 
> users, it was not particularly serious anyway (and was more a reaction to the 
> request of Samson Mow to just release both versions, which to my mind is no 
> different from such a thing).
> Nonetheless, as a thought experiment: the main issue is that some number of 
> people run LOT=true when miners do not activate Taproot early for some reason 
> and we decide to leave LOT=false for this particular bit until it times out.
> The issue is that those people will get forked off the network at the end of 
> this particular deployment attempt.
> I suspect those people will still exist whether or not Bitcoin Core supports 
> any kind of LOT=true mode.
> ("Never again" for some people)
> How do we convince them to go run LOT=false instead of getting themselves 
> forked off?
> Or do we simply let them?
> (and how is that different from asking each user to decide on LOT=false/true 
> right now?)
> ("reasonable default"?)
> (fundamentally speaking you still have to educate the users on the 
> ramifications of accepting the default and changing it.)
> Another thought experiment: From the point of view of a user who strongly 
> supports LOT=true, would dev consensus around releasing LOT=false be 
> considered as "developers forcing their views on users"?
> Why or why not?
> Regards,
> ZmnSCPxj
> > Matt
> >
> > On 2/18/21 10:04, Keagan McClelland wrote:
> >
> > > Hi all,
> > > I think it's important for us to consider what is actually being 
> > > considered for activation here.
> > > The designation of "soft fork" is accurate but I don't think it 
> > > adequately conveys how non-intrusive a change like this
> > > is. All that taproot does (unless I'm completely missing something) is 
> > > imbue a previously undefined script version with
> > > actual semantics. In order for a chain reorg to take place it would mean 
> > > that someone would have to have a use case for
> > > that script version today. This is something I think that we can easily 
> > > check by digging through the UTXO set or
> > > history. If anyone is using that script version, 

Re: [bitcoin-dev] Is BIP32's chain code needed?

2020-10-17 Thread Adam Back via bitcoin-dev
Another advantage of random access in BIP 32 vs an iterated chain is
that if there is a bit-flip or corruption, you don't destroy access to
all future addresses, but only burn one UTXO.  Empirically, this is not
an entirely theoretical issue.
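To illustrate the difference (a toy sketch using plain SHA256, not the real Armory or BIP 32 derivation): in an iterated chain each key depends on the previous one, so a corrupted link loses every later key, while indexed derivation depends only on the master and the index.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

# Iterated chain: key i+1 depends on key i, so corruption of one
# link makes every subsequent key unrecoverable.
def chain_keys(seed: bytes, count: int) -> list:
    keys, k = [], seed
    for _ in range(count):
        k = h(k)
        keys.append(k)
    return keys

# Random access: key i depends only on (seed, i), so corruption of
# one derived key burns one key and nothing else.
def indexed_keys(seed: bytes, count: int) -> list:
    return [h(seed + i.to_bytes(4, "big")) for i in range(count)]
```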

I think the only thing I'd care about is bloating up the number of
characters to back up; if the codes are all derived it doesn't matter
too much.  I tend to think of 128 bits as enough, given that is the
security target of ECDSA, so long as reasonable key-stretching
algorithms are used that don't interact badly with the key use, which
seems a very reasonable assumption for PBKDF2 and ECDSA.

Agree the iterated hashing argument does not seem a practical concern
- e.g. BIP 39's PBKDF2 uses 2048 iterated hash invocations.  I
suppose it's strictly true that as the hash is deterministic and not a
bijection (not a permutation), there are collisions, and if you iterate
enough, unreachable states can be eliminated.  But because the domain
is so large as to be practically unenumerable, it won't create a brute
force short-cut.
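For reference, the BIP 39 stretching step mentioned above is PBKDF2-HMAC-SHA512 with 2048 iterations, the mnemonic sentence as password and "mnemonic" plus the optional passphrase as salt, producing a 64-byte seed:

```python
import hashlib
import unicodedata

def bip39_seed(mnemonic: str, passphrase: str = "") -> bytes:
    # BIP 39: stretch the mnemonic sentence into a 64-byte BIP 32 seed
    # with PBKDF2-HMAC-SHA512 and 2048 iterations.
    mnemonic = unicodedata.normalize("NFKD", mnemonic)
    salt = unicodedata.normalize("NFKD", "mnemonic" + passphrase)
    return hashlib.pbkdf2_hmac("sha512", mnemonic.encode(), salt.encode(),
                               2048, dklen=64)
```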


On Sat, 17 Oct 2020 at 01:35, Pieter Wuille via bitcoin-dev
> On Tuesday, September 29, 2020 10:34 AM, Leonardo Comandini via bitcoin-dev 
>  wrote:
> Hi all,
> BIP32 [1] says: "In order to prevent these from depending solely on the key
> itself, we extend both private and public keys first with an extra 256 bits of
> entropy. This extension, called the chain code...".
> My argument is that the chain code is not needed.
> To support such claim, I'll show a schematic of BIP32 operations to be 
> compared
> with an alternative proposal and discuss the differences.
> I have two main questions:
> - Is this claim false?
> - Has anyone shared this idea before?
> Hi Leonardo,
> It's been a while but I can comment on the history of how the chaincode ended 
> up being in there.
> The most direct reason is that BIP32 was inspired by Alan Reiner's Armory 
> software, which had
> a different homomorphic key derivation scheme, but included something called 
> a chaincode to
> enable multiple "chains" of keys to be derived from the same keypair. More 
> information about
> that scheme is here: 
> BIP32 made two improvements to this:
> * Allow efficient random access into the derived keys (Armory's scheme 
> required iterating the
>   derivation function to get consecutive subkeys - which is probably where 
> the name "chain"
>   in chaincode comes from)
> * Permit hierarchical derivation, by also constructing a sub-"chaincode" 
> along with every subkey.
> If I recall correctly, there was at least one argument at the time about 
> whether the chaincode was
> necessary at all. My rationale for keeping it was:
> * xpubs are not as secret as private keys, but they do demand more protection 
> than just public keys
>   (for both privacy reasons, and due to the fact that revealing an xpub + 
> child xprv is ReallyBad(tm)).
>   For that reason, it seems nice that an xpub consists of more than just a 
> public key, as revealing
>   the public key in it means the protection above remains. I don't think 
> there is anything fundamental
>   here; just a distinct encoding for xpubs and pubkeys might have 
> accomplished the same, but this
>   felt safer.
> * Repeated hashing "felt" dangerous, as it reduces entropy at every step, so 
> it'd go below 256 bits.
>   With a chaincode to maintain extra entropy this is prevented. In 
> retrospect, this is a bogus
>   argument, as it's only a relevant point for information-theoretical 
> security (which means we wouldn't
>   be able to use ECC in the first place), and even then, it's only a minimal 
> effect.
> So in short, from a cryptographic point of view, I think that indeed, the 
> chaincode is not needed. It
> probably has some qualitative advantage in practice, but not very much.
> Cheers,
> --
> Pieter
> ___
> bitcoin-dev mailing list
bitcoin-dev mailing list

Re: [bitcoin-dev] Deterministic Entropy From BIP32 Keychains

2020-04-06 Thread Adam Back via bitcoin-dev
I looked at it and consider the crypto choices reasonable, reusing
existing bitcoin dependencies as library crypto building blocks.

For myself, I think the use-case of having an offline seed manager that
can be backed up once and support multiple wallets, including ones
created after the backup, addresses a practical and under-addressed
problem for many users and businesses.

The fact that the interface between an offline seed manager and a
hardware or software wallet can be a bip39 mnemonic seed is convenient,
and an improvement over using custom derivation paths in practice,
given the complexity of custom paths and their variable support in
wallets.


On Mon, 6 Apr 2020 at 20:43, Rodolfo Novak via bitcoin-dev
> Hello,
> We are planning on implementing the [Deterministic Entropy From BIP32 
> Keychains](
>  BIP on Coldcard.
> Is there a BIP number planned to be assigned and is there any review of this 
> BIP yet?
> Regards,
> ℝ.
> Rodolfo Novak  ||  Coinkite Inc.  ||  GPG: B444CDDA

Re: [bitcoin-dev] RFC: Deterministic Entropy From BIP32 Keychains

2020-03-25 Thread Adam Back via bitcoin-dev
I think the point is to use this proposed extension/standard for a
kind of "seed management" function: set it up on an offline device (an
always-offline laptop, or a modified hardware wallet) where you put
the master seed.  Then you use this as a kind of seed manager and
transcribe the seeds for different sub-wallets into the respective
wallets as BIP39 mnemonics.

So I do not think it needs any changes from existing wallet authors,
as the interaction point is a bip 39 seed, which they mostly know how
to use.  Indeed, if you were to modify an existing wallet to accept the
master seed from the seed-management scheme and derive the seed it needs
on each wallet, that would create a weakest-link-in-the-chain risk: if
that wallet were insecure or compromised, then all other derived seeds
would be too, and you want independence for each wallet.

I do think that this use case is a practical problem for people
managing multiple seeds for various wallets in a bitcoin business
setting, or even for power users: you lose the single-backup design,
because it's too cumbersome to create fresh backups for each of
multiple wallets, especially distributed, fireproof cryptosteel etc
backups.  So in practice, in multi-wallet scenarios, they are probably
not all fully backed up, or not backed up as robustly (not as
geo-redundant, fireproof, secret-shared etc).
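For concreteness, the final step of the scheme under discussion (as it appears in what was later assigned BIP 85) turns a BIP32-derived child private key into fresh entropy via HMAC-SHA512 keyed with the string "bip-entropy-from-k". A minimal sketch of that step, assuming the child key has already been derived along the application path:

```python
import hmac
import hashlib

def entropy_from_child_key(child_privkey: bytes, bits: int = 128) -> bytes:
    # HMAC-SHA512 keyed with "bip-entropy-from-k" over the derived child
    # private key, truncated to the requested entropy size
    # (128 bits -> a 12-word BIP 39 mnemonic for the sub-wallet).
    full = hmac.new(b"bip-entropy-from-k", child_privkey, hashlib.sha512).digest()
    return full[: bits // 8]
```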


On Tue, 24 Mar 2020 at 09:32, Tim Ruffing via bitcoin-dev
> I think your proposal is simply to use BIP32 for all derivations and
> the observation that you can work with derived keys with the
> corresponding suffixes of the path. I believe that this is a good idea.
> But I don't think that simply writing a standard will help. It's just
> one step. If all your wallets support incompatible formats, we should
> work on fixing this because that's the root of the issue. Otherwise you
> end up converting keys back and forth manually (as Chris pointed out),
> and this can't be the goal.
> But then you need to reach out to wallet devs explicitly and get them
> involved in creating the standard. Otherwise they won't use it. That's
> a hard process, and it's even harder to make sure that the resulting
> proposal isn't way too complex because everyone brings their special
> case to the table.
> Tim
> On Sun, 2020-03-22 at 11:58 +, Ethan Kosakovsky via bitcoin-dev
> wrote:
> > I have completely revised the wording of this proposal I hope to be
> > clearer in explaining the motivation and methodology.
> >
> >
> >
> > Ethan
> >
> > ‐‐‐ Original Message ‐‐‐
> > On Friday, March 20, 2020 4:44 PM, Ethan Kosakovsky via bitcoin-dev <
> >> wrote:
> >
> > > I would like to present a proposal for discussion and peer review.
> > > It aims to solve the problem of "too many seeds and too many
> > > backups" due to the many reasons stipulated in the proposal text.
> > >
> > >
> > >
> > > 
> > > BIP:
> > > Title: Deterministic Entropy From BIP32 Keychains
> > > Author: Ethan Kosakovsky
> > > Comments-Summary: No comments yet.
> > > Comments-URI:
> > > Status: Proposed
> > > Type: Standards Track
> > > Created: 2020-03-20
> > > License: BSD-2-Clause
> > > OPL
> > > 
> > >
> > > ==Abstract==
> > >
> > > This proposal provides a way to derive entropy from a HD keychain
> > > path in order to deterministically derive the initial entropy used
> > > to create keychain mnemonics and seeds.
> > >
> > > ==Motivation==
> > >
> > > BIP32 uses some initial entropy as a seed to deterministically
> > > derive a BIP32 root for hierarchical deterministic keychains. BIP39
> > > introduced a method of encoding initial entropy into a mnemonic
> > > phrase which is used as input to a one way hash function in order
> > > to deterministically derive a BIP32 seed. The motivation behind
> > > mnemonic phrases was to make it easier for humans to backup and
> > > store offline. There are also other variations of this theme.
> > >
> > > The initial motivation of BIP32 was to make handling of large
> > > numbers of private keys easier to manage and backup, since you only
> > > need one BIP32 seed to cover all possible keys in the keychain. In
> > > practice however, due to various wallet implementations and
> > > security models, the average user may be faced with the need to
> > > handle an ever growing number of seeds/mnemonics. This is due to
> > > incompatible wallet standards, hardware wallets (HWW), seed formats
> > > and standards, as well as, the need to used a mix of hot and cold
> > > wallets depending on the application and environment.
> > >
> > > Examples would span wallets on mobile phones, online servers
> > > running protocols like Join Market or Lightning, and the 

Re: [bitcoin-dev] Proof-of-Stake Bitcoin Sidechains

2019-01-22 Thread Dr Adam Back via bitcoin-dev
Brands credentials use this: single-show, and multiple-show,
credentials.  They are based on the representation problem, which is the
generalisation to multiple bases: Schnorr is one base, Pedersen
commitments are two bases, and the representation problem is n>2 bases.

The method used would work for Schnorr or DSA, and there was some 2013
era #bitcoin-wizards discussion on this topic, where if you spend
twice miners can take your money, as a strong way to "discourage"
address reuse.  One side effect, though, is that you force ACID,
log-oriented storage on the wallet, and many wallets are low-power
devices, or even run in VMs that could be snapshotted or rolled back.
This is a similar risk model to the lightning penalty for accidentally
doing a hostile close in the current model (whereas ELTOO has a
non-penalty-based close).

You would have to be careful not to use related nonces (k = nonce
committed to by R = kG), as Schnorr and DSA are highly vulnerable to
that: two such samples are solvable as simultaneous equations.
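To see why, here is the scalar arithmetic only (a sketch of my own: the group element R and the real challenge computation e = H(R, P, m) are elided, and the key/nonce values are arbitrary). Two Schnorr signatures sharing a nonce k are two linear equations in the two unknowns k and x:

```python
import hashlib

# secp256k1 group order; only arithmetic mod q is shown here.
q = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def H(msg: str) -> int:
    # Stand-in Fiat-Shamir challenge (real Schnorr also hashes R and the key).
    return int.from_bytes(hashlib.sha256(msg.encode()).digest(), "big") % q

x = 0x1337C0DE          # private key (toy value)
k = 0xDEADBEEF0001      # nonce, wrongly reused for both messages

e1, e2 = H("pay Alice 1 BTC"), H("pay Bob 2 BTC")
s1 = (k + e1 * x) % q   # Schnorr signature scalar: s = k + e*x mod q
s2 = (k + e2 * x) % q

# s1 - s2 = (e1 - e2) * x  (mod q), so anyone with both signatures
# solves for the private key:
recovered_x = ((s1 - s2) * pow(e1 - e2, -1, q)) % q
```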

What the Brands n-show credential looks like is a precommitment: as
with single-show, the address becomes A=H(R,Q) where Q is the public
key, and for n-show it becomes A=H(R1,...,Rn,Q).

Signing becomes providing i, Ri, Q in the Script to satisfy a
scriptPubKey that includes the three.  In practice you would probably
need to store the Ri values in a Merkle tree, so that you provide
log(n) hashes (or some other structuring) rather than n inputs.
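A sketch of that structuring (a generic SHA256 Merkle tree of my own, not tied to any particular Script encoding): commit to the root over the Ri values, then reveal a log(n)-sized inclusion path for the i-th use.

```python
import hashlib

def h(b: bytes) -> bytes:
    return hashlib.sha256(b).digest()

def merkle_root(leaves: list) -> bytes:
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])          # duplicate last on odd levels
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def merkle_proof(leaves: list, index: int) -> list:
    # List of (sibling_hash, sibling_is_left) pairs from leaf to root.
    level = [h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        sib = index ^ 1
        proof.append((level[sib], sib < index))
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return proof

def verify(leaf: bytes, proof: list, root: bytes) -> bool:
    node = h(leaf)
    for sibling, is_left in proof:
        node = h(sibling + node) if is_left else h(node + sibling)
    return node == root
```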

Anyway, the main points are the fragility to related nonces, and the
cost of ACID, log-structured-storage levels of reliability in wallets.


On Tue, 22 Jan 2019 at 15:14, ZmnSCPxj via bitcoin-dev
> Good Morning Matt,
> > ### ZmnSCPxj,
> >
> > I'm intrigued by this mechanism of using fixed R values to prevent multiple 
> > signatures, but how do we derive the R values in a way where they are
> unique for each blockheight but still can be used to create signatures or 
> verify?
> One possibility is to derive `R` using standard hierarchical derivation.
> Then require that the staking pubkey be revealed to the sidechain network as 
> actually being `staking_pubkey = P + hash(P || parent_R) * G` (possibly with 
> some trivial protection against Taproot).
> To sign for a blockheight `h`, you must use your public key `P` and the 
> specific `R` we get from hierarchical derivation from `parent_R` and the 
> blockheight as index.
> Regards,
> ZmnSCPxj

Re: [bitcoin-dev] Multiparty signatures

2018-07-11 Thread Adam Back via bitcoin-dev
On Wed, Jul 11, 2018, 02:42 Erik Aronesty via bitcoin-dev <> wrote:
> Basically you're just replacing addition with interpolation everywhere in
the musig construction

Yes, but you can't do that without a delinearization mechanism to prevent
adaptive public key choice being used to break the scheme using Wagner's
attack. It is not specific to addition, it is a generalized birthday attack.

Look at the delinearization mechanism for an intuition: each public key
is hashed along with a per-value hash, so that this pre-commits to, and
forces, the public keys being non-adaptively chosen.

Adaptively chosen public keys are dangerous and simple to exploit: for
example, with pub keys A+B, an added party C' chooses C = C' - A - B;
now they can sign for A+B+C alone, using the adaptively chosen public
key C.
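The algebra of that rogue-key cancellation, in a toy multiplicative group standing in for EC points (insecure parameters, purely illustrative; the secret values are arbitrary):

```python
# Toy group: "point" for secret x is g^x mod p, with g generating the
# order-q subgroup of Z_p* (p = 2q+1, both prime). NOT secure parameters.
p, q, g = 2039, 1019, 4

a, b = 111, 222                       # honest parties' secret keys
A, B = pow(g, a, p), pow(g, b, p)     # their announced public keys

c_att = 333                           # attacker's real secret key
C_real = pow(g, c_att, p)
# Rogue key: announce C = C_real * (A*B)^-1, cancelling the honest keys.
C = C_real * pow(A * B, -1, p) % p

# Naive key aggregation by multiplication (addition for EC points):
aggregate = (A * B * C) % p
# aggregate == C_real, a key the attacker alone controls.
```

MuSig's delinearization defeats this because each key's coefficient depends on a hash of all announced keys, so C cannot be chosen as a function of A and B without changing its own coefficient.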

Btw Wagner also breaks this earlier delinearization scheme


Re: [bitcoin-dev] Simple lock/unlock mechanism

2018-02-28 Thread Adam Back via bitcoin-dev
Coincidentally I had thought of something similar to what Kalle posted
about, a kind of software-only time-lock vault, and described the idea
to a few people off-list.  Re. Root incompatibility: if the key is
deleted (as it must be) then a delegated signature cannot be made
that bypasses the CSV timeout restriction, so Root should not be
incompatible with this.  I think it would be disadvantageous to mark
keys as Rootable vs not in a sighash sense, because then that is
another privacy/fungibility loss, eroding the uniformity advantage of
Root when the delegate is not used.

One drawback is that deleting keys may itself be a bit difficult to
assure with the HD wallet seed's setup-time backup model.

As Anthony described I think, a simpler though less robust model would
be to have a third party refuse to co-sign until a pre-arranged time,
and this would have the advantage of not requiring two on-chain

With bulletproofs and CT rangeproofs / general ECDL ZKPS there is the
possibility to prove things about the private key, or hidden
attributes of a public key in zero-knowledge.  Kind of what we want is
to place private key covenants, where we have to prove that they are
met without disclosing them.  For example there is a hidden CSV and it
is met OR there is no hidden CSV so it is not applicable.


On 28 February 2018 at 23:30, Anthony Towns via bitcoin-dev
> On Wed, Feb 28, 2018 at 04:34:18AM +, アルム カールヨハン via bitcoin-dev wrote:
>> 1. Graftroot probably breaks this (someone could just sign the
>> time-locked output with a script that has no time-lock).
> Making the graftroot key be a 2-of-2 muSig with an independent third party
> that commits to only signing CLTV scripts could avoid this. Making it
> 3-of-3 or 5-of-5 could be even better if you can find multiple independent
> services that will do it.
> Cheers,
> aj

Re: [bitcoin-dev] Satoshilabs secret shared private key scheme

2018-01-23 Thread Adam Back via bitcoin-dev
Makwa cites [1]

Seems like they independently rediscovered it.


On 23 January 2018 at 05:54, Ondřej Vejpustek via bitcoin-dev
>> Yes, this scheme.
> In addition to the scheme, I found out, that Makwa
> (, a hashing function which received a
> special recognition in the Password Hashing Competition, supports a
> delegation. In fact, Makwa is similar to the suggested scheme.
> Unfortunately, both schemes have two drawbacks:
>   (1) There is no proof that the host computes what he's supposed to do.
>   (2) The delegation is far slower than the normal computation.
> According to the Makwa paper
> ( the delegation is
> typically 100 to 1000 slower. So I see little advantage in delegating.
> I doubt there is a scheme that suits our needs.

Re: [bitcoin-dev] hypothetical: Could soft-forks be prevented?

2017-09-15 Thread Adam Back via bitcoin-dev
True, however in principle a soft fork can also be soft-forked out. E.g.
say there is a publicly known soft fork done by miners only, which user
node software did not upgrade for first via opt-in adoption. If there
were consensus against it by users and the ecosystem, a node/user
flag-day soft fork could block its effects. Or if a soft fork was
determined to have a major bug.

However, most types of soft fork are opt-in, so that situation mostly
seems unlikely.  A censorship soft fork is harder: with current
fungibility mechanisms, bypassing it takes a standard hard fork.


On Sep 15, 2017 08:12, "ZmnSCPxj via bitcoin-dev" <> wrote:

> Good morning Dan,
> My understanding is that it is impossible for soft forks to be prevented.
> 1.  Anyone-can-spend
> There are a very large number of anyone-can-spend scripts, and it would be
> very impractical to ban them all.
> For example, the below output script is anyone-can-spend
> So is the below:
> Or:
> Or:
> Or:
> And so on.
> So no, it is not practically possible to ban anyone-can-spend outputs, as
> there are too many potential scriptPubKey that anyone can spend.
> It is even possible to have an output that requires a proof-of-work, like
> so:
> All the above outputs are disallowed from propagation by IsStandard, but a
> miner can put them validly in a block, and IsStandard is not consensus code
> and can be modified.
> 2.  Soft fork = restrict
> It is possible (although unlikely) for a majority of miners to run soft
> forking code which the rest of us are not privy to.
> For example, for all we know, miners are already blacklisting spends on
> Satoshi's coins.  We would not be able to detect this at all, since no
> transaction that spends Satoshi's coins have been broadcast, ever.  It is
> thus indistinguishable from a world where Satoshi lost his private keys.
> Of course, the world where Satoshi never spent his coins and miners are
> blacklisting Satoshi's coins, is more complex than the world where Satoshi
> never spent his coins, so it is more likely that miners are not
> blacklisting.
> But the principle is there.  We may already be in a softfork whose rules
> we do not know, and it just so happens that all our transactions today do
> not violate those rules.  It is impossible for us to know this, but it is
> very unlikely.
> Soft forks apply further restrictions on Bitcoin.  Hard forks do not.
> Thus, if everyone else is entering a soft fork and we are oblivious, we do
> not even know about it.  Whereas, if everyone else is entering a hard fork,
> we will immediately see (and reject) invalid transactions and blocks.
> Thus the only way to prevent soft fork is to hard fork against the new
> soft fork, like Bcash did.
> Regards,
> ZmnSCPxj
>  Original Message 
> Subject: [bitcoin-dev] hypothetical: Could soft-forks be prevented?
> Local Time: September 13, 2017 5:50 PM
> UTC Time: September 13, 2017 9:50 AM
> From:
> To: Bitcoin Protocol Discussion 
> Hi, I am interested in the possibility of a cryptocurrency software
> (future bitcoin or a future altcoin) that strives to have immutable
> consensus rules.
> The goal of such a cryptocurrency would not be to have the latest and
> greatest tech, but rather to be a long-term store of value and to offer
> investors great certainty and predictability... something that markets
> tend to like. And of course, zero consensus rule changes also means
> less chance of new bugs and attack surface remains the same, which is
> good for security.
> Of course, hard-forks are always possible. But that is a clear split
> and something that people must opt into. Each party has to make a
> choice, and inertia is on the side of the status quo. Whereas
> soft-forks sort of drag people along with them, even those who oppose
> the changes and never upgrade. In my view, that is problematic,
> especially for a coin with permanent consensus rule immutability as a
> goal/ethic.
> As I understand it, bitcoin soft-forks always rely on anyone-can-spend
> transactions. If those were removed, would it effectively prevent
> soft-forks, or are there other possible mechanisms? How important are
> any-one-can spend tx for other uses?
> More generally, do you think it is possible to programmatically
> avoid/ban soft-forks, and if so, how would you go about it?

Re: [bitcoin-dev] SF proposal: prohibit unspendable outputs with amount=0

2017-09-06 Thread Adam Back via bitcoin-dev
The pattern used by Felix Weiss' BIP for Confidential Transactions
depends on or is tidier with 0-value outputs.


On 7 September 2017 at 00:54, CryptAxe via bitcoin-dev
> As long as an unspendable outputs (OP_RETURN outputs for example) with
> amount=0 are still allowed I don't see it being an issue for anything.
> On Sep 5, 2017 2:52 PM, "Jorge Timón via bitcoin-dev"
>  wrote:
>> This is not a priority, not very important either.
>> Right now it is possible to create 0-value outputs that are spendable
>> and thus stay in the utxo (potentially forever). Requiring at least 1
>> satoshi per output doesn't really do much against a spam attack to the
>> utxo, but I think it would be slightly better than the current
>> situation.
>> Is there any reason or use case to keep allowing spendable outputs
>> with null amounts in them?
>> If not, I'm happy to create a BIP with its code, this should be simple.

Re: [bitcoin-dev] Updating the Scaling Roadmap

2017-07-11 Thread Adam Back via bitcoin-dev
Separate from scale, there is utility to a hard-fork to fix wish-list
bugs that can't be reasonably fixed via soft-fork.  The Spoonnet
proposal fixes a good number of interesting bugs.  Spoonnet and
several other HF research proposals can be found here.  Part of the
research on HF is about safe deployment methods, which is obviously the
other main consideration.  It seems to me likely that if the HF were to
focus on bug fixes, and not mix in new tradeoffs of security vs scale,
it would more easily reach consensus.


On 11 July 2017 at 17:03, Chris Stewart via bitcoin-dev
> Concept ACK.
> I think you are overstating the readiness of drivechains though. I think the
> optimistic estimate for drivechains to be ready for bitcoin core is a year
> out from today. More likely the date should be early 2018. Still a lot of
> work to be done! :-)
> Also I don't know if I would put a hard fork suggestion in the scaling roadmap.
> If drivechains are successful they should be viewed as the way we scale --
> not hard forking the protocol. Do you still have capacity concerns if
> drivechains are successful?
> -Chris
> On Mon, Jul 10, 2017 at 11:50 AM, Paul Sztorc via bitcoin-dev
>  wrote:
>> Summary
>> =======
>> In my opinion, Greg Maxwell's scaling roadmap [1] succeeded in a few
>> crucial ways. One success was that it synchronized the entire Bitcoin
>> community, helping to bring finality to the (endless) conversations of
>> that time, and get everyone back to work. However, I feel that the Dec
>> 7, 2015 roadmap is simply too old to serve this function any longer. We
>> should revise it: remove what has been accomplished, introduce new
>> innovations and approaches, and update deadlines and projections.
>> Why We Should Update the Roadmap
>> ================================
>> In a P2P system like Bitcoin, we lack authoritative info-sources (for
>> example, a "textbook" or academic journal), and as a result
>> conversations tend to have a problematic lack of progress. They do not
>> "accumulate", as everyone must start over. Ironically, the scaling
>> conversation _itself_ has a fatal O(n^2) scaling problem.
>> The roadmap helped solve these problems by being constant in size, and
>> subjecting itself to publication, endorsement, criticism, and so forth.
>> Despite the (unavoidable) nuance and complexity of each individual
>> opinion, it was at least globally known that X participants endorsed Y
>> set of claims.
>> Unfortunately, the Dec 2015 roadmap is now 19 months old -- it is quite
>> obsolete and replacing it is long overdue. For example, it highlights
>> older items (CSV, compact blocks, versionbits) as being _future_
>> improvements, and makes no mention of new high-likelihood improvements
>> (Schnorr) or mis-emphasizes them (LN). It even contains mistakes (SegWit
>> fraud proofs). To read the old roadmap properly, one must already be a
>> technical expert. For me, this defeats the entire point of having one in
>> the first place.
>> A new roadmap would be worth your attention, even if you didn't sign it,
>> because a refusal to sign would still be informative (and, therefore,
>> helpful)!
>> So, with that in mind, let me present a first draft. Obviously, I am
>> strongly open to edits and feedback, because I have no way of knowing
>> everyone's opinions. I admit that I am partially campaigning for my
>> Drivechain project, and also for this "scalability"/"capacity"
>> distinction...that's because I believe in both and think they are
>> helpful. But please feel free to suggest edits.
>> I emphasized concrete numbers, and concrete dates.
>> And I did NOT necessarily write it from my own point of view, I tried
>> earnestly to capture a (useful) community view. So, let me know how I did.
>>   Beginning of New ("July 2017") Roadmap Draft 
>> This document updates the previous roadmap [1] of Dec 2015. The older
>> statement endorsed a belief that "the community is ready to deliver on
>> its shared vision that addresses the needs of the system while upholding
>> its values".
>> That belief has not changed, but the shared vision has certainly grown
>> sharper over the last 18 months. Below is a list of technologies which
>> either increase Bitcoin's maximum tps rate ("capacity"), or which make
>> it easier to process a higher volume of transactions ("scalability").
>> First, over the past 18 months, the technical community has completed a
>> number of items [2] on the Dec 2015 roadmap. VersionBits (BIP 9) enables
>> Bitcoin to handle multiple soft fork upgrades at once. Compact Blocks
>> (BIP 152) allows for much faster block propagation, as does the FIBRE
>> Network [3]. CheckSequenceVerify (BIP 112) allows trading partners to
>> mutually update an active transaction without writing it to the
>> blockchain (this helps to 

Re: [bitcoin-dev] BIP: OP_BRIBVERIFY - the op code needed for Blind Merge Mined drivechains

2017-06-27 Thread Adam Back via bitcoin-dev
On 27 June 2017 at 22:20, Luke Dashjr via bitcoin-dev
> On Wednesday 28 June 2017 12:37:13 AM Chris Stewart via bitcoin-dev wrote:
>> BRIBEVERIFY redefines the existing NOP4 opcode. When executed, if the given
>> critical hash is included at the given vout index in the coinbase
>> transaction the script evaluates to true. Otherwise, the script will fail.
>> This allows sidechains to be merged mined against
>> bitcoin without burdening bitcoin miners with extra resource requirements.
> I don't see how. It seems like the logical outcome from this is "whoever pays
> the most gets the next sidechain block"... That's not particularly useful for
> merge mining.

Maybe that's phrased badly but the point of the "blind merge mining"
is just that the sidechain fees are paid in main chain bitcoin (rather
than in sidechain bitcoin).

That means that a miner who solo mines the main chain could still mine
the sidechain by requesting a block-proposal from a trusted sidechain
fullnode.  The sidechain fullnode would actually pay the mainchain
fee, and pay itself the sidechain fees as part of the side-chain block.

This was viewed as less centralising than forcing miners to directly
process sidechain blocks, which could in principle be bandwidth and
CPU expensive to process, construct and validate.

bitcoin-dev mailing list

Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-20 Thread Adam Back via bitcoin-dev
Also, Jonas Nick gave a fairly comprehensive presentation on privacy
leaks in the bitcoin protocol, including SPV and the bloom-filter query
problem.


On 20 June 2017 at 14:08, bfd--- via bitcoin-dev
> On 2017-06-20 12:52, Tom Zander via bitcoin-dev wrote:
>> Second, stating that a bloom filter is a "total loss of privacy" is
>> equally
>> baseless and doesn’t need debunking.
> "On the Privacy Provisions of Bloom Filters in Lightweight Bitcoin Clients"
>> We show analytically and empirically that the reliance on Bloom filters
>> within existing SPV clients leaks considerable information about the
>> addresses of Bitcoin users. Our results show that an SPV client who uses a
>> modest number of Bitcoin addresses (e.g., < 20) risks revealing almost all
>> of his addresses.

Re: [bitcoin-dev] Deploying CT in Bitcoin without extension blocks?

2017-04-12 Thread Adam Back via bitcoin-dev
See this soft-fork proposal from Felix Weiss


On Apr 12, 2017 5:43 PM, "Oleg Andreev via bitcoin-dev" <> wrote:

> (This is a sketch, not a fully-formed proposal, just to kick off the
> discussion.)
> Confidential Transactions (by GMaxwell & Poelstra) require a new
> accounting model,
> new representation of numbers (EC points as Pedersen commitments) and
> range proofs
> per number. Setting aside performance and bandwidth concerns (3-4Kb per
> output,
> 50x more signature checks), how would we deploy that feature on Bitcoin
> network
> in the most compatible manner?
> I'll try to present a sketch of the proposal. I apologize if this
> discussion already
> happened somewhere, although I couldn't find anything on this subject,
> apart from Elements
> sidechain proposal, of course.
> At first glance we could create a new extblock and transaction format, add
> a protocol to
> "convert" money into and from such extblock, and commit to that extblock
> from the
> outer block's coinbase transaction. Unfortunately, this opens gates to a
> flood of
> debates such as what should be the block size limit in such block, should
> we
> take opportunity to fix over 9000 pet-peeve issues with existing
> transactions
> and blocks, should we adjust inflation schedule, insert additional PoW,
> what would
> Satoshi say etc. Federated sidechain suffers from the same issues, plus
> adds
> concerns regarding governance, although it would be more decoupled, which
> is useful.
> I tried to look at a possibility to make the change as compatible as
> possible,
> sticking confidential values right into the existing transaction structure
> and
> see how that would look like. As a nice bonus, confidential transactions
> would have
> to fit into the hard-coded 1 Mb limit, preserving the drama around it :-P
> We start with a segwit-enabled script versioning and introduce 2 new
> script versions:
> version A has an actual program concatenated with the commitment, while
> version B
> has only the commitment and allows mimblewimble usage (no signatures,
> non-interactive
> cut-through etc). Legacy cleartext amount can nicely act as "min value" to
> minimize
> the range proof size, and range proofs themselves are provided separately
> in the
> segregated witness payload.
> Then, we soft fork additional rules:
> 1. In non-coinbase tx, sum of commitments on inputs must balance with sum
> of commitments
>on the outputs plus the cleartext mining fee in the witness.
> 2. Range proof can be confidential, based on borromean ring signature.
> 3. Range proof can be non-confidential, consisting of an amount and raw
> blinding factor.
> 4. Tx witness can have an excess value (cf. MW) and cleartext amount for a
> miner's fee.
> 5. In coinbase tx, total plaintext reward + commitments must balance with
> subsidy,
>legacy fees and new fees in the witness.
> 6. Extra fees in the witness must be signed with the excess value's key.
> The confidential transactions use the same UTXO set, can be co-authored
> with plaintext inputs/outputs
> using legacy software and maybe even improve scalability by compressing
> on-chain transactions
> using mimblewimble cut-through.
> The rules above could have been made more complicated with export/import
> logic to allow users
> converting their coins to and from confidential ones, but that would
> require
> more complex support from miners to respect and merge outputs representing
> "plaintext value bank",
> mutate export transactions, which in turn requires introduction of a
> non-malleable TxID
> that excludes miner-adjustable export/import outputs.
> The rules above have a nice side effect that miners, being the minters of
> confidential coins,
> can sell them at a premium, which creates an incentive for them to
> actually support
> that feature and work on improving performance of rangeproof validation
> (e.g. in GPUs).
> Would love to hear comments and criticism of that approach.
> Thanks!
> Oleg.
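Balancing rule 1 above rests on the additive homomorphism of Pedersen commitments. A toy sketch of that homomorphism (integers mod a prime with made-up bases; real CT uses elliptic-curve points and range proofs, none of which appear here):

```python
# Toy Pedersen commitments over Z_p^* (NOT real CT; this only
# illustrates the additive homomorphism behind the "inputs balance
# outputs plus cleartext fee" rule).
p = 2**127 - 1          # a Mersenne prime
g, h = 3, 5             # fixed public bases (toy parameters)

def commit(value, blind):
    return pow(g, value, p) * pow(h, blind, p) % p

# Inputs commit to 5 and 3; outputs commit to 6; cleartext fee is 2.
# Blinding factors are chosen so they cancel (sum_in == sum_out).
inputs  = [commit(5, 11), commit(3, 7)]
outputs = [commit(6, 18)]
fee = 2

lhs = 1
for c in inputs:
    lhs = lhs * c % p
rhs = pow(g, fee, p)
for c in outputs:
    rhs = rhs * c % p

assert lhs == rhs   # commitments balance without revealing 5, 3 or 6
```

A verifier checks the equation without learning any of the committed amounts, which is the entire point of rule 1.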

Re: [bitcoin-dev] Managing block size the same way we do difficulty (aka Block75)

2016-12-11 Thread Adam Back via bitcoin-dev
Well, I think empirical game-theory observed on the network involves more
types of strategy than honest vs dishonest: at least 4, maybe 5, types of
strategy, and I would argue that lumping the strategies together results
in incorrect game-theory conclusions and predictions.

A) altruistic players (protocol following by principle to be good network
citizens, will forgo incremental profits to aid network health) eg aim to
decentralize hashrate, will mine stuck transactions for free, run pools
with zero fee, put more effort into custom spam filtering, tend to be power
users, or long term invested etc.

B) honest players (protocol following but non-altruistic or just
lazy/asleep run default software, but still leaving some dishonest profit
untaken). Eg reject spy mining, but no charitable actions, will not
retaliate in kind to semi-honest zero sum attacks that reduce their profits.

C) semi-honest (will violate protocol if their attack can be plausibly
deniable or argued to be not hugely damaging to network security). Eg spy
mining, centralised pools increasing other miners orphan rates.

D) rational players (will violate the protocol for profit: will not overtly
steal from users via double spends, but anything short particularly
disadvantaging other miners even if it results in centralisation is treated
as fair game) eg selfish mining. Would increase block size by filling with
pay to self transactions, if it increased orphans for others.

E) dishonest players (aka hyper-rational: will actually steal from users
probabilistically if possible, not as worried about detection). Eg double
spend and probabilistic double spends (against onchain gambling games).
Would DDoS competing pools.

In part the strategies depend on investment horizon: it is long-term
rational for altruistic players to forgo incremental short-term profit to
improve user experience.  It is hyper-rational to buy votes in an "ends
justify the means" mentality, though fortunately most network players are
not dishonest.

So-called meta-incentive (unwillingness to risk hurting bitcoin due to
intended long-term hodling of coins or ASICs) can also explain bias towards
honest or altruistic strategies.

Renting too much hashrate is risky as it can avoid the meta-incentive and
increase rational or dishonest strategies.

In particular, re differentiating this from a 51% attack: so long as > 50%
are semi-honest, honest or altruistic, it won't happen.  It would seem
actually that > 66-75% are, because we have not seen selfish mining on the
network.  Though I think conveniently slow block publication by some players
in the 60% spy-mining semi-honest cartel was seen for a while; the claim has
been it was short-lived and due to a technical issue.

It would be interesting to try to categorise and estimate the network %
engaging in each strategy.  I think the information is mostly known.


On Dec 11, 2016 03:22, "Daniele Pinna via bitcoin-dev" <> wrote:

> How is the adverse scenario you describe different from a plain old 51%
> attack? Each proposed protocol change  where 51% or more  of the network
> can potentially game the rules and break the system should be considered
> just as acceptable/unacceptable as another.
> There comes a point where some form of basic honesty must be assumed on
> behalf of participants benefiting from the system working properly and
> reliably.
> Afterall, what magic line of code prohibits all miners from simultaneously
> turning all their equipment off...  just because?
> Maybe this 'one':
> "As long as a majority of CPU power is controlled by nodes that are not
> cooperating to attack the network, they'll generate the longest chain and
> outpace attackers. The network itself requires minimal structure."
> Is there such a thing as an unrecognizable 51% attack?  One where the
> remaining 49% get dragged in against their will?
> Daniele
> On Dec 10, 2016 6:39 PM, "Pieter Wuille"  wrote:
>> On Sat, Dec 10, 2016 at 4:23 AM, Daniele Pinna via bitcoin-dev <
>>> wrote:
>>> We have models for estimating the probability that a block is orphaned
>>> given average network bandwidth and block size.
>>> The question is, do we have objective measures of these two quantities?
>>> Couldn't we target an orphan_rate < max_rate?
>> Models can predict orphan rate given block size and network/hashrate
>> topology, but you can't control the topology (and things like FIBRE hide
>> the effect of block size on this as well). The result is that if you're
>> purely optimizing for minimal orphan rate, you can end up with a single
>> (conglomerate of) pools producing all the blocks. Such a setup has no
>> propagation delay at all, and as a result can always achieve 0 orphans.
>> Cheers,
>> --
>> Pieter
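The kind of model Pieter refers to can be sketched with a standard first-order approximation (a Poisson block-arrival assumption; the delay values are illustrative, not measured):

```python
import math

def orphan_probability(propagation_delay_s, block_interval_s=600.0):
    """First-order model: a competing block may be found during the
    window in which the network has not yet seen ours.  Block discovery
    is a Poisson process, so P(orphan) ~= 1 - exp(-delay / interval)."""
    return 1.0 - math.exp(-propagation_delay_s / block_interval_s)

# Illustrative only: in the simplest model, larger blocks mean longer
# propagation delay, which is where the size/orphan coupling comes
# from -- and a single pool with zero internal delay has zero orphans,
# which is Pieter's centralisation point.
for delay in (0.0, 5.0, 10.0):
    print(delay, orphan_probability(delay))
```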

Re: [bitcoin-dev] BIP proposal: Increase block size limit to 2 megabytes

2016-02-06 Thread Adam Back via bitcoin-dev
Hi Gavin

It would probably be a good idea to have a security considerations
section, also, is there a list of which exchange, library, wallet,
pool, stats server, hardware etc you have tested this change against?

Do you have a rollback plan in the event the hard-fork triggers via
false voting as seemed to be prevalent during XT?  (Or rollback just
as contingency if something unforseen goes wrong).

How do you plan to monitor and manage security through the hard-fork?


On 6 February 2016 at 16:37, Gavin Andresen via bitcoin-dev
> Responding to "28 days is not long enough" :
> I keep seeing this claim made with no evidence to back it up.  As I said, I
> surveyed several of the biggest infrastructure providers and the btcd lead
> developer and they all agree "28 days is plenty of time."
> For individuals... why would it take somebody longer than 28 days to either
> download and restart their bitcoind, or to patch and then re-run (the patch
> can be a one-line change MAX_BLOCK_SIZE from 100 to 200)?
> For the Bitcoin Core project:  I'm well aware of how long it takes to roll
> out new binaries, and 28 days is plenty of time.
> I suspect there ARE a significant percentage of un-maintained full nodes--
> probably 30 to 40%. Losing those nodes will not be a problem, for three
> reasons:
> 1) The network could shrink by 60% and it would still have plenty of open
> connection slots
> 2) People are committing to spinning up thousands of supports-2mb-nodes
> during the grace period.
> 3) We could wait a year and pick up maybe 10 or 20% more.
> I strongly disagree with the statement that there is no cost to a longer
> grace period. There is broad agreement that a capacity increase is needed
> NOW.
> To bring it back to bitcoin-dev territory:  are there any TECHNICAL
> arguments why an upgrade would take a business or individual longer than 28
> days?
> Responding to Luke's message:
>> On Sat, Feb 6, 2016 at 1:12 AM, Luke Dashjr via bitcoin-dev
>>  wrote:
>> > On Friday, February 05, 2016 8:51:08 PM Gavin Andresen via bitcoin-dev
>> > wrote:
>> >> Blog post on a couple of the constants chosen:
>> >>
>> >
>> > Can you put this in the BIP's Rationale section (which appears to be
>> > mis-named
>> > "Discussion" in the current draft)?
> I'll rename the section and expand it a little. I think standards documents
> like BIPs should be concise, though (written for implementors), so I'm not
> going to recreate the entire blog post there.
>> >
>> >> Signature operations in un-executed branches of a Script are not
>> >> counted
>> >> OP_CHECKMULTISIG evaluations are counted accurately; if the signature
>> >> for a
>> >> 1-of-20 OP_CHECKMULTISIG is satisfied by the public key nearest the
>> >> top
>> >> of the execution stack, it is counted as one signature operation. If it
>> >> is
>> >> satisfied by the public key nearest the bottom of the execution stack,
>> >> it
>> >> is counted as twenty signature operations. Signature operations
>> >> involving
>> >> invalidly encoded signatures or public keys are not counted towards the
>> >> limit
>> >
>> > These seem like they will break static analysis entirely. That was a
>> > noted
>> > reason for creating BIP 16 to replace BIP 12. Is it no longer a concern?
>> > Would
>> > it make sense to require scripts to commit to the total accurate-sigop
>> > count
>> > to fix this?
> After implementing static counting and accurate counting... I was wrong.
> Accurate/dynamic counting/limiting is quick and simple and can be completely
> safe (the counting code can be told the limit and can "early-out"
> validation).
> I think making scripts commit to a total accurate sigop count is a bad
> idea-- it would make multisignature signing more complicated for zero
> benefit.  E.g. if you're circulating a partially signed transaction to that
> must be signed by 2 of 5 people, you can end up with a transaction that
> requires 2, 3, 4, or 5 signature operations to validate (depending on which
> public keys are used to do the signing).  The first signer might have no
> idea who else would sign and wouldn't know the accurate sigop count.
>> >
>> >> The amount of data hashed to compute signature hashes is limited to
>> >> 1,300,000,000 bytes per block.
>> >
>> > The rationale for this wasn't in your blog post. I assume it's based on
>> > the
>> > current theoretical max at 1 MB blocks? Even a high-end PC would
>> > probably take
>> > 40-80 seconds just for the hashing, however - maybe a lower limit would
>> > be
>> > best?
> It is slightly more hashing than was required to validate block number
> 364,422.
> There are a couple of advantages to a very high limit:
> 1) When the fork is over, special-case code for dealing with old blocks can
> be eliminated, because all old blocks satisfy the new limit.
> 2) 
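The accurate counting with "early-out" that Gavin describes can be sketched roughly as follows (hypothetical structure and limit; the real logic would live in the script interpreter):

```python
# Sketch of "accurate" sigop counting with an early-out.  A 1-of-20
# CHECKMULTISIG costs as many sigops as the number of pubkeys actually
# tried before the signature verifies, and counting aborts as soon as
# the running total passes the block limit, so counting never costs
# more than the limit itself.

MAX_SIGOPS = 20000  # illustrative per-block limit, not the real constant

class SigopLimitExceeded(Exception):
    pass

def count_multisig_sigops(keys_tried, counter, limit=MAX_SIGOPS):
    """keys_tried: how many pubkeys were checked before success."""
    counter += keys_tried
    if counter > limit:
        raise SigopLimitExceeded
    return counter

count = 0
count = count_multisig_sigops(1, count)    # sig matched first key: 1 op
count = count_multisig_sigops(20, count)   # sig matched last of 20: 20 ops
assert count == 21
```

This is also why the first signer of a 2-of-5 cannot know the final accurate count: it depends on which keys end up signing.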

Re: [bitcoin-dev] Time to worry about 80-bit collision attacks or not?

2016-01-08 Thread Adam Back via bitcoin-dev
Tricky choice.  On the one hand I had spotted this too before, and maybe
one or two more exceptions to bitcoin's 128-bit security target, and had
been vaguely tut-tutting about them in the background.  It's kind of a
violation of the crypto rule of thumb that you want to balance things and
not have odd weak points, as Watson was implying: it puts you closer to
the margin if there is a slip or other problem, and you end up with an
imbalanced crypto format.

On the other hand it's not currently a problem as such and it's less
change and slightly more compact.

RIPEMD is probably less well reviewed than SHA2.  However SHA1 has
problems, and SHA2 is basically a bigger SHA1; hence the NIST motivation
for SHA3, designed to fix the design flaw in SHA1 (and SHA2 in
principle).

So then, if we agree with this rule of thumb (and not doing so would fly
against best practices, which we probably shouldn't do in such a
security-focussed domain), then what this discussion is really about is
when is a good time to write down tech debt.

I think that comes back to segregated witness itself, which writes down a
tidily organised, robust fix (by lines of code) to a bunch of
long-standing problems.

Doing a 2MB hard-fork in comparison fixes nothing really.  Leaving
known issues to bake in for another N years eventually builds up on
you (not even in security, just in software engineering) as another
rule of thumb.  I mean if we don't fix it now that we are making a
change that connects, when will we?

In software projects I ran, we always disguised the cost of tech-debt as
non-negotiable, baked into our estimates without a line item, to escape
the PHB syndrome of haggling for features instead of tech debt (which is
_never_ a good idea :)

Pragmatism vs refactoring as you go.

But for scale I think segregated witness does offer the intriguing next
step of being able to do 2-of-2, 3-of-3 and N-of-N multisig at the size
of a single-sig transaction (indistinguishable from one, which is also
good for privacy), as well as K-of-N key tree sigs, which are also
significantly more compact.

There was also the other thing I mentioned further up the thread: if we
want to take the approach of living with a little bit of bloat while
getting back to a universal 128-bit target, there are still some fixable
sources of bloat going on:
a) sending pubKey in the signature vs recovery (modulo interference
with Schnorr batch verify compatibility*);
b) using the PubKey instead of PKH in the ScriptPubKey, though that
loses the nice property of not having the key to do DL attacks on
until the signed transaction is broadcast;
c) I think there might be a way to combine hash & PubKey to keep the
delayed PubKey publication property and yet still save the bloat of
having both.

* I did suggest to Pieter that you could let the miner decide to forgo
Schnorr batch verifiability to get compaction from recovery - the pub
key could be optionally elided from the scriptSig serialisation by the

The other thing we could consider is variable sized hashes (& a few
pubkey size choices), but that is added software complexity.  We might be
better off focussing on the bigger picture like IBLT/weak-blocks and
bigger wins like MAST, multiSig Schnorr & key tree sigs.

Didn't get time to muse on c), but it's a nice crypto question for someone :)

Another thing to note is that combining hash functions has been known to
be fragile to bad interactions or unexpected behaviours.  This paper
talks about the tradeoffs and weaknesses in hash combiners.

Weak concept NACK from me, I think, for losing a cleanup opportunity and
storing the problem up for the future, when there is a reasonable
opportunity to fix it now.


On 8 January 2016 at 15:34, Watson Ladd via bitcoin-dev
> On Fri, Jan 8, 2016 at 4:38 AM, Gavin Andresen via bitcoin-dev
>  wrote:
>> On Fri, Jan 8, 2016 at 7:02 AM, Rusty Russell  wrote:
>>> Matt Corallo  writes:
>>> > Indeed, anything which uses P2SH is obviously vulnerable if there is
>>> > an attack on RIPEMD160 which reduces its security only marginally.
>>> I don't think this is true?  Even if you can generate a collision in
>>> RIPEMD160, that doesn't help you since you need to create a specific
>>> SHA256 hash for the RIPEMD160 preimage.
>>> Even a preimage attack only helps if it leads to more than one preimage
>>> fairly cheaply; that would make grinding out the SHA256 preimage easier.
>>> AFAICT even MD4 isn't this broken.
>> It feels like we've gone over that before, but I can never remember where or
>> when. I believe consensus was that if we were using the broken MD5 in all
>> the places we use RIPEMD160 we'd still be secure today because of Satoshi's
>> use of nested hash functions everywhere.
>>> But just with Moore's law (doubling every 18 months), we'll worry about
>>> economically viable attacks in 20 years.[1]
>>> That's far enough 
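For context on the thread subject: Bitcoin's HASH160 is RIPEMD-160 over SHA-256, and its 160-bit output is what caps collision resistance at roughly 2^80 by the birthday bound, even while preimage resistance stays near 2^160. A small sketch, using truncated double SHA-256 as a stand-in since `ripemd160` is not available in every hashlib build:

```python
import hashlib

def hash160_like(data: bytes) -> bytes:
    """Stand-in for Bitcoin's HASH160 (RIPEMD-160 of SHA-256): here we
    truncate a double SHA-256 to 160 bits, since ripemd160 is missing
    from some hashlib builds.  What matters for the thread is only the
    output size: 160 bits => ~2^80 collision work (birthday bound) but
    ~2^160 preimage work."""
    inner = hashlib.sha256(data).digest()
    return hashlib.sha256(inner).digest()[:20]

d = hash160_like(b"pubkey bytes")
assert len(d) == 20
collision_work = 2 ** (len(d) * 8 // 2)   # birthday bound
preimage_work = 2 ** (len(d) * 8)
assert collision_work == 2**80 and preimage_work == 2**160
```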

[bitcoin-dev] fork types (Re: An implementation of BIP102 as a softfork.)

2015-12-30 Thread Adam Back via bitcoin-dev
> I guess the same could be said about the softfork flavoured SW implementation

No, segregated witness is a
soft-fork maybe loosely similar to P2SH - particularly it is backwards
and forwards compatible by design.

These firm forks have the advantage over hard forks that there is no
left-over weak chain that is at risk of losing money (because it
becomes a consensus rule that old transactions are blocked).

There is also another type of fork a firm hard fork that can do the
same but for format changes that are not possible with a soft-fork.

Extension blocks show a more general backwards and forwards compatible
soft-fork is also possible.
Segregated witness is simpler.


On 30 December 2015 at 13:57, Marcel Jamin via bitcoin-dev
> I guess the same could be said about the softfork flavoured SW
> implementation. In any case, the strategy pattern helps with code structure
> in situations like this.
> 2015-12-30 14:29 GMT+01:00 Jonathan Toomim via bitcoin-dev
> :

Re: [bitcoin-dev] Segregated Witness in the context of Scaling Bitcoin

2015-12-17 Thread Adam Back via bitcoin-dev
While it is interesting to contemplate moving to a world with
hard-fork only upgrades (deprecate soft-forks), now is possibly not
the time to consider that.  Someone can take that topic and make a
what-if sketch for how it could work and put it on the wishlist wiki
if it's not already there.

We want to be pragmatic and constructive to reach consensus and that
takes not mixing in what-ifs or orthogonal long standing problems into
the mix, as needing to be fixed now.


On 17 December 2015 at 19:52, Jeff Garzik via bitcoin-dev
> On Thu, Dec 17, 2015 at 1:46 PM, jl2012  wrote:
>> This is not correct.
>> As only about 1/3 of nodes support BIP65 now, would you consider CLTV tx
>> are less secure than others? I don't think so. Since one invalid CLTV tx
>> will make the whole block invalid. Having more nodes to fully validate
>> non-CLTV txs won't make them any safer. The same logic also applies to SW
>> softfork.
> Yes - the logic applies to all soft forks.  Each soft fork degrades the
> security of non-upgraded nodes.
> The core design of bitcoin is that trustless nodes validate the work of
> miners, not trust them.
> Soft forks move in the opposite direction.  Each new soft-forked feature
> leans very heavily on miner trust rather than P2P network validation.

Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-14 Thread Adam Back via bitcoin-dev
I think someone, maybe Pieter, commented on this relay issue that it
would likely be very transitory, as a lot of stuff would be fairly
quickly upgraded in practice, judging from previous deployment
experience.  In any case, there is a huge excess of connectivity and
capacity in the p2p network versus what is needed to keep a network of
various versions connected and to support SPV client load (SPV load is
quite low relative to capacity; even one respectable node can support a
large number of SPV clients).

(I.e. two classes of network node and connectivity wouldn't be a problem
in practice even if it did persist; also the higher-capacity, better-run
nodes are more likely to upgrade, due to having more clued-in power-user,
miner, pool or company operators.)

Maybe someone with more detailed knowledge could clarify further.


On 14 December 2015 at 19:21, Jonathan Toomim via bitcoin-dev
> This means that a server supporting SW might only hear of the tx data and
> not get the signature data for some transactions, depending on how the relay
> rules worked (e.g. if the SW peers had higher minrelaytxfee settings than
> the legacy peers). This would complicate fast block relay code like IBLTs,
> since we now have to check to see that the recipient has both the tx data
> and the witness/sig data.
> The same issue might happen with block relay if we do SW as a soft fork. A
> SW node might see a block inv from a legacy node first, and might start
> downloading the block from that node. This block would then be marked as
> in-flight, and the witness data might not get downloaded. This shouldn't be
> too hard to fix by creating an inv for the witness data as a separate
> object, so that a node could download the block from e.g. Peer 1 and the
> segwit data from Peer 2.
> Of course, the code would be simpler if we did this as a hard fork and we
> could rely on everyone on the segwit fork supporting the segwit data.
> Although maybe we want to write the interfaces in a way that supports some
> nodes not downloading the segwit data anyway, just because not every node
> will want that data.
> I haven't had time to read sipa's code yet. I apologize for talking out of a
> position of ignorance. For anyone who has, do you feel like sharing how it
> deals with these network relay issues?
> By the way, since this thread is really about SegWit and not about any other
> mechanism for increasing Bitcoin capacity, perhaps we should rename it
> accordingly?
> On Dec 12, 2015, at 11:18 PM, Mark Friedenbach via bitcoin-dev
>  wrote:
> A segwit supporting server would be required to support relaying segwit
> transactions, although a non-segwit server could at least inform a wallet of
> segwit txns observed, even if it doesn't relay all information necessary to
> validate.
> Non segwit servers and wallets would continue operations as if nothing had
> occurred.
> If this means essentially that a soft fork deployment of SegWit will require
> SPV wallet servers to change their logic (or risk not being able to send
> payments) then it does seem to me that a hard fork to deploy this non
> controversial change is not only cleaner (on the data structure side) but
> safer in terms of the potential to affect the user experience.
> — Regards,

Re: [bitcoin-dev] BIP - Block size doubles at each reward halving with max block size of 32M

2015-11-14 Thread Adam Back via bitcoin-dev
There is a difference between miners signalling intent (as they have
been doing for various BIPs, which is mostly informational only - they
are mostly not running the code, and in some cases it is not even
implemented, so they can't be) and a 95% miner-majority consensus
rule.  The former can be useful information, as you said; the latter
implies, as Luke described, something that is not really accurate: for
a hard-fork it is not strictly only a miner upgrade that is needed for
basic safety, as it is with soft-forks.  BIP 103, for example, is
flag-day based, and I think this is a more accurate approach.  Miner
votes can also be misleading - miners can vote for one thing but run
something else, and what they are actually running is not generally
detectable/enforceable - see for example what happened with the BIP66
accidental fork due to "SPV mining" (i.e. validationless mining).

A hard-fork is for everyone to upgrade and talk with each other to see
that the vast majority is on the same plan which includes users,
ecosystem companies & miners.


On 14 November 2015 at 01:02, digitsu412 via bitcoin-dev
> Well I'd like to think that with an economy all parts of it interact with
> each other in ways more complex than simplistic imperative logic.
> I agree that the economic majority is essentially what matters in a hard
> fork but everyone (miners,devs,public thought leaders,businesses) is part of
> that economy. Additionally what miners signal as their intention affects the
> decision of that economic majority (and vice versa).  You can see the
> effects of this in traditional political processes in how preliminary vote
> polling results affect (reinforce) the final vote.
> We also can see the results of this in (dare I mention) the whole XT affair
> which had the signed intent of many of the economy (payment processors and
> wallets and one miner pool) and the rest of the miners did not go along with
> it. This experiment either means that the rest of the miners couldn't be
> bothered to signal at all (because they didn't know how) or they were
> affected by the influence of core devs or the opinions of others on the
> matter and rejected the economic majority.  (Which would imply core devs
> have some power by way of indirect influence) I would be inclined to believe
> the latter was more likely.
> The conclusion which this would seem to imply is that at the very least,
> miners matter (to what exact extent is debatable).  And although there is no
> direct control of any party over the other in the strict sense, the public
> vocal opinions of any part of the Bitcoin economy does have an effect in its
> ability to sway the opinions of the other parts.
> Digitsu
> — Regards,
> On Sat, Nov 14, 2015 at 7:29 AM, Luke Dashjr  wrote:
>> On Friday, November 13, 2015 4:01:09 PM wrote:
>> > Forgive the frankness but I don't see why signaling your intent to
>> > support
>> > an upgrade to one side of a hard fork can be seen as a bad thing. If for
>> > nothing else doesn't this make for a smoother flag day? (Because once
>> > you
>> > signal your intention, it makes it hard to back out on the commitment.)
>> It isn't a commitment in any sense, nor does it make it smoother, because
>> for
>> a hardfork to be successful, it is the *economy* that must switch
>> entirely.
>> The miners are unimportant.
>> > If miners don't have any choice in hard forks, who does? Just the core
>> > devs?
>> Devs have even less of a choice in the matter. What is relevant is the
>> economy: who do people want to spend their bitcoins with? There is no
>> programmatic way to determine this, especially not in advance, so the best
>> we
>> can do is a flag day that gets called off if there isn't clear consensus.
>> Luke

Re: [bitcoin-dev] [BIP] Normalized transaction IDs

2015-11-05 Thread Adam Back via bitcoin-dev
Regarding the conflicting spends by the private key holder
(self-signature malleability): that is in principle kind of fixable.

You make a new pubkey type which is (r, Q), where Q=xG is the public
key and r is the ECDSA signature component, chosen at key-generation
time: r is the x-coordinate of the point-compressed R = (r, f(r)) = kG,
i.e. the pre-computable part of an ECDSA signature (unrelated to the
message, which can be decided later).

You make a new address type which is a = H(r,Q).

Then you make a new signature type which requires that the r from
sig=(r,s) matches the r committed to in the address.

As the ECDSA signature is s=(H(m)+r*x)/k mod n, if they sign two
different messages with the same r value they reveal the private key
via simultaneous equations: from s=(H(m)+r*x)/k and s'=(H(m')+r*x)/k,
anyone can solve k=(H(m)-H(m'))/(s-s') and x=(s*k-H(m))/r, allowing
anyone who sees both double-spends to spend themselves, by replacing
the signature with their own.  That converts double signatures into
outputs any miner can claim.

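The key-recovery algebra above can be sketched concretely. This is a minimal illustration: for brevity it skips the elliptic-curve step that derives r from R=kG and uses arbitrary small values, since the recovery depends only on (r, s, H(m)) modulo the group order n:

```python
# Sketch: recovering an ECDSA private key from two signatures that
# share the same nonce k (and hence the same r).
# n is the secp256k1 group order; x, k, r, h1, h2 are illustrative.
n = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

x = 0x11112222   # private key (illustrative)
k = 0x33334444   # nonce reused across both signatures
r = 0x55556666   # stands in for the x-coordinate of R = kG
h1 = 0x7777      # H(m):  hash of first message
h2 = 0x8888      # H(m'): hash of second message

inv = lambda a: pow(a, -1, n)        # modular inverse (Python 3.8+)
s1 = (h1 + r * x) * inv(k) % n       # s  = (H(m)  + r*x)/k mod n
s2 = (h2 + r * x) * inv(k) % n       # s' = (H(m') + r*x)/k mod n

# Anyone observing both signatures solves the simultaneous equations:
k_rec = (h1 - h2) * inv(s1 - s2) % n    # k = (H(m)-H(m'))/(s-s')
x_rec = (s1 * k_rec - h1) * inv(r) % n  # x = (s*k-H(m))/r
assert (k_rec, x_rec) == (k, x)
```

With x in hand, an observer can sign any replacement transaction, which is what turns the double-signature into something any miner can claim.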
It doesn't necessarily enforce no pubkey reuse (Q), as a=H(r,Q) and
a'=H(r',Q) are different addresses, though it does enforce no
extended-address reuse (a=H(r,Q)).
Address reuse becoming a binary failure (reuse leaks the private key)
could be an issue.  It also puts pressure on wallets' transactional
storage.


On 5 November 2015 at 20:36, Luke Dashjr via bitcoin-dev
> On Thursday, November 05, 2015 3:27:37 PM Jorge Timón wrote:
>> On Tue, Nov 3, 2015 at 11:01 PM, Luke Dashjr via bitcoin-dev
>>  wrote:
>> > On Tuesday, November 03, 2015 9:44:02 PM Christian Decker wrote:
>> >> So this is indeed a form of desired malleability we will likely not be
>> >> able to fix. I'd argue that this goes more into the direction of
>> >> double-spending than a form of malleability, and is mostly out of scope
>> >> for this BIP. As the abstract mentions this BIP attempts to eliminate
>> >> damage incurred by malleability in the third party modification
>> >> scenario and in the multisig scenario, with the added benefit of
>> >> enabling transaction templating. If we can get the segregated witnesses
>> >> approach working all the better, we don't even have the penalty of
>> >> increased UTXO size. The problem of singlesig users doublespending
>> >> their outputs to update transactions remains a problem even then.
>> >
>> > I don't know what you're trying to say here. Double spending to the same
>> > destination(s) and malleability are literally the same thing. Things
>> > affected by malleability are still just as broken even with this BIP -
>> > whether it is triggered by a third-party or not is not very relevant.
>> I think this is just a terminology confusion.
>> There's conflicting spends of the same outputs (aka unconfirmed
>> double-spends), and there's signature malleability which Segregated
>> Witnesses solves.
>> If we want to define malleability as signature malleability +
>> conflicting spends, then that's fine.
>> But it seems Christian is mostly interested in signature malleability,
>> which is what SW can solve.
>> In fact, creating conflicting spends is sometimes useful for some
>> contracts (ie to cancel the contract when that's supposed to be
>> allowed).
>> Maybe it is "incorrect" that people use "malleability" when they're
>> specifically talking about "signature malleability", but I think that
>> in this case it's clear that we're talking about transactions having
>> an id that cannot be changed just by signing with a different nonce
>> (what SW provides).
> Ok, then my point is that "signature malleability" is not particularly
> problematic or interesting alone, and the only way to get a practically-useful
> solution, is to address all kinds of malleability.
> Luke

[bitcoin-dev] soft-fork security (Re: Let's deploy BIP65 CHECKLOCKTIMEVERIFY!)

2015-10-07 Thread Adam Back via bitcoin-dev
On 7 October 2015 at 18:26, Jonathan Toomim (Toomim Bros) via
bitcoin-dev  wrote:
> On Oct 7, 2015, at 9:02 AM, Eric Lombrozo  wrote:
> If you had a 99% hashpower supermajority on the new version, an attacker
> would still be able to perform this attack once per day.

[ie wait for a non-upgraded miner to win a block]

I don't think that is something strong and new to focus on or worry
about, because in Bitcoin's game theory there are, let's say, 3 types
of miners we are in aggregate trying to get security from:

a) honest (following protocol) bolstered by financial incentive to
remain honest of subsidy & fees
b) agnostic / lazy (just run software, upgrade when they lose money
and/or get shouted at)
c) dishonest

Bitcoin remains secure with various combinations of percentages.  For
sure you won't have a good time if you assume < 1% are dishonest.

Therefore this attack can already happen, and in fact has.  Users of
bitcoin must behave accordingly with confirmations.

Bitcoin used directly is not super secure for unconfirmed (so-called
0-confirm) transactions, or even for 1-confirm transactions.  See also
the Finney attack.

That does not prevent people using unconfirmed transactions with risk
scoring, or in high trust settings, or high margin businesses selling
digital artefacts or physical with nominal incremental cost.

But it does mean that one has to keep that in mind.  And it also
motivates lightning network or payment channels (lightning with one
intermediate node vs a network of nodes) - they can provide basically
instant 0-confirm securely, and that seems like the future.

In my opinion anyone relying on unconfirmed transactions needs to
monitor for problems, and have some plan B or workaround if the fraud
rates shoot up (if someone tries to attack it in an organised way),
and also a plan C mid-term plan to do something more robust.  Some
people are less charitable and want to kill unconfirmed transactions
immediately.  The message is the same ultimately.


[bitcoin-dev] extension-blocks/sidechains & fractional/coin-cap/demurrage (Re: Let's deploy BIP65 CHECKLOCKTIMEVERIFY!)

2015-10-07 Thread Dr Adam Back via bitcoin-dev
Micha, I think you are correct: I don't think extension blocks (or
sidechains for that matter) can allow a soft-fork increase of the total
bitcoins in the system, because the main chain still enforces the 21m
coin cap.  A given extension block could go fractional, but if there
was a run to get out, the last users out would lose, or they'd all
take a hair-cut, etc.  So presumably users would decline to use an
extension block with fractional bitcoin.

I mean, you could view it like an exchange (MtGox?) that somehow
accidentally or intentionally creates fictional Bitcoin IOUs in its
system, e.g. in some kind of pyramid scheme - that doesn't create more
bitcoins, it just means people who think they have IOUs for real
bitcoins are holding fractional or fake ones.  With an extension block
or sidechain, furthermore, it is transparent, so they will know they
are fractional.
Relatedly, it seems possible to implement a sidechain with advertised
demurrage, with an exit or entrance fee to discourage holding outside
of the chain to avoid demurrage.  There are apparently economic
arguments for why people might opt in to that (higher-velocity
economic activity, Gresham's law, merchants offering discounts for
buying with demurrage bitcoins, maybe lower per-transaction fees
because, say, miners can mine the demurrage).  However that is a
different topic, and again it does not change the number of coins in
the system.


On 7 October 2015 at 08:13, Micha Bailey via bitcoin-dev
> On Monday, October 5, 2015, Mike Hearn via bitcoin-dev
>  wrote:
>>> As Greg explained to you repeatedly, a softfork won't cause a
>>> non-upgraded full node to start accepting blocks that create more
>>> subsidy than is valid.
>> It was an example. Adam Back's extension blocks proposal would, in fact,
>> allow for a soft forking change that creates more subsidy than is valid (or
>> does anything else) by hiding one block inside another.
> Maybe I'm missing something, but wouldn't this turn into a hard fork the
> moment you try to spend an output created in one of these extension blocks?
> So sure, the block that contains the extension would be considered valid,
> but unupgraded validators will not update the UTXO set accordingly, meaning
> that those new TXOs can't be spent because, according to their rules, they
> don't exist.

Re: [bitcoin-dev] on rough consensus

2015-10-07 Thread Adam Back via bitcoin-dev
Thank you for posting that - most informative - and I suggest that
people who have been arguing here lately read it carefully.

May I suggest that people who wish to debate what rough consensus
means take it to this reddit thread.

Thanks again for posting, helpful context/reminder for all.


On 7 October 2015 at 07:07, Ryan Grant via bitcoin-dev
> Bitcoin's participants can improve their ability to stay on a valuable
> and censorship resistant blockchain by individually and informally
> absorbing cultural wisdom regarding "rough consensus".  This does not
> require writing any formal rules about what rough consensus is.  It is
> a matter of participation with an understanding.

Re: [bitcoin-dev] Dev-list's stance on potentially altering the PoW algorithm

2015-10-02 Thread Adam Back via bitcoin-dev
See also


On 2 October 2015 at 10:20, Jorge Timón
> On Oct 2, 2015 10:03 AM, "Daniele Pinna via bitcoin-dev"
>  wrote:
>> should an algorithm that guarantees protection from ASIC/FPGA optimization
>> be found.
> This is demonstrably impossible: anything that can be done with software can
> be done with hardware. This is computer science 101.
> And specialized hardware can always be more efficient, at least energy-wise.
> On the other hand, BIP99 explicitly contemplates "anti-miner hardforks"
> (obviously not for so called "ASIC-resistance" [an absurd term coined to
> promote some altcoins], but just for restarting the ASIC and mining market
> in case mining becomes too centralized).

Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Adam Back via bitcoin-dev
On 30 September 2015 at 13:11, Mike Hearn via bitcoin-dev
>> Have I missed a proposal to change BIP101 to be a real hardfork
> There's no such thing as a "real" hard fork - don't try and move the goal
> posts. SPV clients do not need any changes to do the right thing with BIP
> 101, they will follow the new chain automatically, so it needs no changes.

BIP101 is a hybrid: in some ways it is a hard-fork and in other ways
it is a soft-fork.  It is a hard-fork to full nodes, but a soft-fork
to SPV clients: by definition, SPV clients have changes made for them
whether they approve or not, as they are not even aware of the change.

> To repeat: CLTV does not have consensus at the moment.

I think people are saying CLTV is long discussed and does have consensus.

> Several people have asked several times now: given the very real and widely
> acknowledged downsides that come with a soft fork, what is the specific
> benefit to end users of doing them?
> Until that question is answered to my satisfaction I continue to object to
> this BIP on the grounds that the deployment creates financial risk
> unnecessarily.

Let's not conflate CLTV with a discussion about future possible
deployment methods.  Forks are an interesting but different topic.

Soft-forks have a lot of mileage on them at this point; hard-forks do
not, and are in any case inherently riskier, even ignoring our lack
of practical experience with planned hard-forks.

With a soft-fork, while it's clear there is a temporary security model
reduction for SPV nodes (and non-upgraded full nodes) in the period
before they upgrade, this is preferable to the risks of a system-wide
coordinated hard-fork upgrade.  There is some limit if the complexity
of soft-forking a feature is quite complicated (eg one could argue
that with soft-fork extension-blocks vs hard-fork method of increasing
block-size for example).  So the balance, which I think is easily met
with CLTV, is that soft-fork is simple-enough technically and the
feature is entirely non-controversial and additive functionality
improvement without downside or reason for dissent.

To my view this is an answer to your question "what is the specific
benefit to end users of doing [soft-forks]" -- it is a lower risk, and
therefore faster way to deploy non-controversial (additive) changes.

Given the CLTV is useful for improving lightning efficiency this is
good for improving Bitcoin's scalability.


Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Adam Back via bitcoin-dev
I think, from discussion with Gavin sometime during the Montreal
Scaling Bitcoin workshop, XT may be willing to make things easy and
adapt what it's doing.  In relation to versionBits, for example, Gavin
said he'd be willing to update XT with an updated/improved versionBits
implementation.

It seems more sensible to do what is simple and clean, have core do
that, and have XT follow if there is no particular philosophical
debate on a given technical topic.  This seems a quite constructive
approach.

On 30 September 2015 at 00:05, Rusty Russell via bitcoin-dev
> "Wladimir J. van der Laan via bitcoin-dev"
>  writes:
>> On Sun, Sep 27, 2015 at 02:50:31PM -0400, Peter Todd via bitcoin-dev wrote:
>>> It's time to deploy BIP65 CHECKLOCKTIMEVERIFY.
>> There appears to be common agreement on that.
>> The only source of some controversy is how to deploy: versionbits versus
>> IsSuperMajority. I think the versionbits proposal should first have code
>> out there for longer before we consider it for concrete softforks. Haste-ing
>> along versionbits because CLTV is wanted would be risky.
> Agreed.  Unfortunately, a simple "block version >= 4" check is
> insufficient, due to XT which sets version bits 001111.
> Given that, I suggest using the simple test:
> if (pstart->nVersion & 0x8)
> ++nFound;
> Which means:
> 1) XT won't trigger it.
> 2) It won't trigger XT.
> 3) You can simply set block nVersion to 8 for now.
> 4) We can still use versionbits in parallel later.
> Cheers,
> Rusty.

Re: [bitcoin-dev] Let's deploy BIP65 CHECKLOCKTIMEVERIFY!

2015-09-30 Thread Adam Back via bitcoin-dev
I was talking about the versionBits from Rusty's email (pasted below) and
simplifying that by XT adopting the patch as Gavin had seemed agreeable to.


Rusty wrote:
> Agreed.  Unfortunately, a simple "block version >= 4" check is
> insufficient, due to XT which sets version bits 001111.
> Given that, I suggest using the simple test:
> if (pstart->nVersion & 0x8)
> ++nFound;
> Which means:
> 1) XT won't trigger it.
> 2) It won't trigger XT.
> 3) You can simply set block nVersion to 8 for now.
> 4) We can still use versionbits in parallel later.
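
As a rough illustration of how Rusty's bit test would feed a supermajority count, here is a hypothetical Python sketch; the function names and the window/threshold values are illustrative, not Bitcoin Core's actual IsSuperMajority code:

```python
# Hypothetical sketch of supermajority counting using the proposed
# (nVersion & 0x8) test; window/threshold values are illustrative.
def count_signalling(versions, window=1000):
    """Count blocks in the last `window` whose version has bit 0x8 set."""
    return sum(1 for v in versions[-window:] if v & 0x8)

def is_super_majority(versions, required=750, window=1000):
    return count_signalling(versions, window) >= required

# 800 of the last 1000 blocks set nVersion = 8, the rest stay at 4:
versions = [8] * 800 + [4] * 200
assert is_super_majority(versions)        # 800 >= 750
assert not is_super_majority([4] * 1000)  # version 4 never sets bit 0x8
```

The point of testing a single bit rather than `nVersion >= 4` is exactly properties (1) and (2) above: blocks that set other version bits neither trigger this count nor are triggered by it.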

Re: [bitcoin-dev] Yet another blocklimit proposal / compromise

2015-09-11 Thread Adam Back via bitcoin-dev
Bitcoin security depends on the enforcement of consensus rules, which
is done by economically dependent full nodes.  This is distinct from
miners' full nodes, and it balances miners' interests; otherwise SPV
node security and decentralisation of policy would tend to degrade, I
think.  Therefore it is important that running a full node be
reasonably convenient, for decentralisation security.

Also you may want to read this summary of Bitcoin decentralisation by Mark:

I think you may also be misunderstanding what the Chinese miners said
about 8MB: that was a cap on the maximum they felt they could handle
with current network infrastructure.

I had proposed 2-4-8MB growing over a 4-year time frame, with 2MB once
the hard-fork has been upgraded to by everyone in the network.  (I
don't consider miner triggers, as with soft-fork upgrades, to be an
appropriate roll-out mechanism, because it is more important that
economically dependent full nodes upgrade, though it can be useful to
know that miners have also upgraded to a reasonable extent, to avoid a
temporary hashrate drop-off affecting security.)


On 9 September 2015 at 15:00, Marcel Jamin via bitcoin-dev
> I think the overlap of people who want to run a serious mining operation and
> people who are unable to afford a slightly above average internet connection
> is infinitesimally small.
> 2015-09-09 20:51 GMT+02:00 Jorge Timón :
>> On Sep 9, 2015 8:36 PM, "Marcel Jamin via bitcoin-dev"
>>  wrote:
>> >
>> > I propose to:
>> >
>> > a) assess what blocklimit is currently technically possible without
>> > driving up costs of running a node up too much. Most systems currently
>> > running a fullnode probably have some capacity left.
>> What about the risk of further increasing mining centralization?

[bitcoin-dev] Bitcoin threat modelling thread

2015-09-10 Thread Adam Back via bitcoin-dev

Came across this useful thread (!topic/bitcoin-xt/zbPwfDf7UoQ)
discussing Bitcoin threat modelling; it may reach a wider audience on this

Text from Mike Hearn:

I think the next stage is to build a threat model for Bitcoin.

This mail starts with background. If you already know what a threat model
is you can skip to the last section where I propose a first draft, as the
starting point for discussion.

An intro to threat modelling

In security engineering, a threat model is a document that informally
answers questions like:

- Which adversaries (enemies) do you care about?
- What can they do?
- Why do they want to attack you?
- As a result: what threats do they pose?
- How do you prioritise these threats?

Establishing a threat model is an important part of any security
engineering project. In the early days of secure computing, threat
modelling hadn't been invented and as a result projects frequently hit the
following problem:

Every threat looked equally serious, so it became impossible to prioritise

Almost anything could become a threat, if you squinted right

So usability, performance, code maintainability etc were sacrificed over
and over to try and defend against absurd or very unlikely threats just
because someone identified one, in an endless race

The resulting product sucked and nobody used it, thus protecting people
from no threats at all

PGP is a good example of this problem in action.

Making good threat models isn't easy (see The Economist article, "New
Threat Model Army").
It can be controversial, because a threat model involves accepting that you
can't win all the time - there will exist adversaries that you
realistically cannot beat. Writing them down and not worrying about them
anymore liberates you to focus on other threats you might do a better job
at, or to work on usability, or features, or other things that users might
actually care about more.

You can make your threat model too weak, so it doesn't encompass real
threats your users are likely to encounter. But a much more common problem
is making the model too strong: including *too many* different kinds of
threats. Strangely, this can make your product *less* secure rather than *more*.

One reason is that with too many threats in your model, you can lose your
ability to prioritise: every threat seems equally important even if perhaps
really they aren't, and then you can waste time solving "threats" that are
absurd or incredibly unlikely.

Even worse, once people add things in to a threat model they hate taking
them out, because it'd imply that previous efforts were wasted.

The Tor threat model

A good example of this is Tor. As you my know I have kind of a love/hate
relationship with Tor. It's a useful thing, but I often feel they could do
things differently.

The Tor developers have a threat model, and
it is a very strong one. Tor tries to protect you against adversaries that
care about very small leaks of application level data, like a browser
reporting your screen size, because it sees its mission as making all
traffic look identical, rather than just hiding your IP address. As a
consequence of this threat model Tor is meant to be used with apps that are
specifically "Torified", like their Tor Browser which is based on Firefox.
If a user takes the obvious approach of just downloading and running the
Tor Browser Bundle, their iTunes traffic won't be anonymised. The rationale
is it's useless to route traffic of random apps via Tor because even if
that hides the IP address, the apps might leak private data anyway as they
weren't designed for it.

This threat model has a couple of consequences:

It's extremely easy to think you're hiding your IP address when in fact you
aren't, due to using or accidentally running non-Torified apps.

The Tor Browser is based on Firefox. When Chrome came along it had a
clearly superior security architecture, because it was sandboxed, but the
Tor project had made a big investment in customising Firefox to anonymise
things like screen sizes. They didn't want to redo all that work.

The end result of this is that Tor's adversaries discovered they could just
break Tor completely by hacking the web browser, as Firefox is the least
secure browser and yet it's the one the Tor project recommends. The Snowden
files contain a bunch of references to this.

Interestingly, the Tor threat model explicitly *excludes* the NSA because
it can observe the whole network (it is the so-called "global passive
adversary"). Tor does this because they want to 

Re: [bitcoin-dev] Dynamic limit to the block size - BIP draft discussion

2015-09-08 Thread Adam Back via bitcoin-dev
The maximum block-size is one that can be filled at zero cost by
miners, and so allows some kinds of amplification of
selfish-mining-related attacks.


On 8 September 2015 at 13:28, Ivan Brightly via bitcoin-dev
> This is true, but miners already control block size through soft caps.
> Miners are fully capable of producing smaller blocks regardless of the max
> block limit, with or without collusion. Arguably, there is no need to ever
> reduce the max block size unless technology advances for some reason reverse
> course - aka, WW3 takes a toll on the internet and the average bandwidth
> available halves. The likelihood of significant technology contraction in
> the near future seems rather unlikely and is more broadly problematic for
> society than bitcoin specifically.
> The only reason for reducing the max block limit other than technology
> availability is if you think that this is what will produce the fee market,
> which is back to an economic discussion - not a technology scaling
> discussion.
> On Tue, Sep 8, 2015 at 4:49 AM, Btc Drak via bitcoin-dev
>  wrote:
>> > but allow meaningful relief to transaction volume pressure in response
>> > to true market demand
>> If blocksize can only increase then it's like a market that only goes
>> up which is unrealistic. Transaction will volume ebb and flow
>> significantly. Some people have been looking at transaction volume
>> charts over time and all they can see is an exponential curve which
>> they think will go on forever, yet nothing goes up forever and it will
>> go through significant trend cycles (like everything does). If you
>> dont want to hurt the fee market, the blocksize has to be elastic and
>> allow contraction as well as expansion.

Re: [bitcoin-dev] Let's kill Bitcoin Core and allow the green shoots of a garden of new implementations to grow from its fertile ashes

2015-09-01 Thread Adam Back via bitcoin-dev
Peter, this seems to be a not-well-thought-through course of action,
fun though it may be informally or philosophically, or to tweak
various people's sensibilities.

Bitcoin is a consensus system that does not work if there are
incompatible versions of consensus code competing on the network.
This is why work is underway on libconsensus so we can see diversity
of implementation without the risk of incompatibility arising by
software defect.  It has proven quite hard to match independent
consensus implementations bit for bit against an adaptive adversary
looking for inconsistencies in interpretation.

In terms of protocol updates it is more constructive therefore that
people with a technical interest analyse and validate others proposals
via testing, or make their own proposals so that we can arrive at a
well validated upgrade mechanism that everyone upgrades to in a
coordinated fashion.

Previous intentional upgrades to bitcoin have been
backwards-compatible (via soft-fork which can be secured reasonably
using a miner vote trigger and temporary SPV security for those who
not yet upgraded) but the current topic of a throughput increase is
non-backwards-compatible (via a hard-fork), and the way to minimise
risk of such an upgrade is for everyone to try to upgrade well in
advance of an agreed upgrade schedule, and to be all upgrading to the
*same* consensus rule change.

Encouraging nodes or miners to "vote" by running a range of different
consensus rules isn't really constructive, I feel - it alarms people
who understand the risks, sets things on a path that has to be fixed
while in flight by obvious implication, and isn't collaborative - so
it risks being a politicising suggestion on what should be a purely
technical topic of choosing the best approach, where it is best to
strive to keep things non-emotive, professional and technically
focussed.  Such calls offend the technical sensibilities of people
who understand the risks.

Anyway, let's try to keep things constructive and focus on analysing proposals.


On 31 August 2015 at 19:16, Peter R via bitcoin-dev
> I agree, s7r, that Bitcoin Core represents the most stable code base.  To
> create multiple implementations, other groups would fork Bitcoin Core
> similar to what Bitcoin XT did.  We could have:
> - Bitcoin-A (XT)
> - Bitcoin-B (Blockstream)
> - Bitcoin-C (promoting BIP100)
> - Bitcoin-D
> - etc.
> Innovation from any development group would be freely integrated by any
> other development group, if desired.  Of course, each group would have a
> very strong incentive to remain fork-wise compatible with the other
> implementations.
> In fact, this just gave me a great idea!  Since Wladimir has stated that he
> will not integrate a forking change into Core without Core Dev consensus, I
> suggest we work together to never reach consensus with Bitcoin Core.  This
> will provide impetus for new implementations to fork from Core (like XT did)
> and implement whatever scaling solution they deem best.  The users will then
> select the winning solution simply based on the code they choose to run.
> The other implementations will then rush to make compatible changes in order
> to keep their dwindling user bases.
> This is the decentralized spirit of Bitcoin in action.  Creative
> destruction.  Consensus formed simply by the code that gets run.
> Let's kill Bitcoin Core and allow the green shoots of a garden of new
> implementations to grow from its fertile ashes.

Re: [bitcoin-dev] BIP 10X: Replace Transaction Fees with Data, Discussion Draft 0.2.9

2015-08-23 Thread Adam Back via bitcoin-dev
Some comments:

(i) remove any possibility of free transactions unless

associated with basic transaction data;

I believe it is not possible to prevent free transactions, because
people can pay out of band (via existing banking transfers to miners)
or make payments to addresses belonging to miners (contingent on the
requested user transaction being processed, via an input dependency).

I am not sure I fully understand the way you see monetisation working.
You do indicate this is a far-future, what-if-stage idea, and you do
identify a conflict with fungibility - but I think this is probably so
badly in conflict with fungibility that it conflicts with many planned
Bitcoin improvements and mid-term technical directions.

I would say the long term idealised requirements are that the
transaction itself would have cryptographic fungibility, and policy
relating to identity for authorisation, approval in regulated
transactions would take place at the payment protocol layer.  The
payment protocol is already seeing some use.

The Lightning protocol sees more of the data going point-to-point, and
so neither broadcast nor visible for big-data analytic monetisation.


On 22 August 2015 at 23:51, Jorge Timón wrote:
 Again, did you get a BIP number assigned, or did you assign it yourself?

 On Sat, Aug 22, 2015 at 1:01 PM, Ahmed Zsales via bitcoin-dev wrote:

 In response to public and private comments and feedback, we have updated
 this working draft.

 Update highlights:

 1. Specific clarifications on replacing the Coinbase subsidy and
 supplementing and not replacing transaction fees.

 2. Clarification on block chain overhead. The value of data mining is on a
 bell curve, so year six data will be removed every year.

 3. Added references to an ability to create global, national and regional
 Bitcoin Price Indices for popular baskets of goods transacted with Bitcoin.

 4. Added references for an ability to use structured block chain data for
 Bitcoin capacity and fork planning.

 5. Removed references to price speculation.

 6. Added preferences for deployment dates of January 2017 or January 2018.

 7. Moving towards BIP format after discussion and evaluation period.
 Technical content will increase in due course and discussion content will be

 Further views and feedback welcome.



 On Mon, Aug 17, 2015 at 5:23 PM, Ahmed Zsales


 Here we propose a long-term solution to replace mining rewards and
 transactions fees.

 BIP 104 is currently a discussion draft only.

 Views and feedback welcome.




Re: [bitcoin-dev] Bitcoin XT 0.11A

2015-08-19 Thread Adam Back via bitcoin-dev
Wouldn't the experience for SPV nodes be chaotic?  If the full nodes
are 50:50 XT and Bitcoin Core, then SPV clients would connect at
random, and because XT and Core will diverge immediately after
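The 50:50 scenario can be quantified with a quick back-of-the-envelope calculation (my assumptions, not from the post): if an SPV client opens n outbound connections uniformly at random, how often does it even see a single consistent chain?

```python
def p_consistent(n, p_xt=0.5):
    """Probability that all n randomly chosen peers are on the same chain."""
    return p_xt ** n + (1 - p_xt) ** n

for n in (1, 4, 8):
    print(n, p_consistent(n))
# 1 1.0
# 4 0.125
# 8 0.0078125
```

With a typical 8 peers, under 1% of SPV clients would see a single chain; the rest would be fed two diverging chains at once, with their view flapping as peers are rotated.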


On 19 August 2015 at 15:28, Jorge Timón wrote:
 On Wed, Aug 19, 2015 at 5:41 PM, s7r wrote:
 Hello Jorge, Eric,

 With all this noise on the -dev mail list I had to implement application
 level filters so I can treat with priority posts from certain people,
 you are on that list. While I agree with your arguments, I think it is
 _very_ important to highlight some things. I am neither for the
 blocksize increase neither against it, because plain and simple I don't
 have enough arguments to take some definitive decision on this topic.

 I think everyone is in that position (we just don't have enough data
 about the proposed sizes) or is just too optimistic.

 What I am angry about is spreading FUD that a fork could kill Bitcoin
 and what we are experiencing now is somehow terrible.

 1. Bitcoin XT is not necessarily an attack over Bitcoin and will not
 split it into 2 different coins. It is the result of an open source free
 system which lacks centralization. It is just at an early stage; it could
 have thousands of forks (or fork attempts) during its life.

 Bitcoin XT is just a software fork and nobody seems to have a problem
 with that (as repeated in other threads); people are worried about the
 way BIP101 deployment is going to be attempted using Bitcoin XT.
 We already have more than 5000 software forks and that's totally fine.

 A Schism fork may not kill Bitcoin but it will certainly create 2
 different coins.
 The claim that there will be a winner and everybody will just move
 there is incredibly naive and uninformative.
 Many people will sell their xtbtc and reject the hardfork
 independently of its support by miners.
 Nobody knows what the result will be, but both currencies' prices
 dropping near zero is certainly a possibility that Gavin and Mike
 either are not aware of or are not informing their followers about.
 Here's something a little bit longer about this topic:
 Note the last part:

 +This is very disruptive and hopefully will never be needed. But if
 +it's needed the best deployment path is just to activate the rule
 +changes after certain block height in the future. On the other hand,
 +it is healthy decentralization-wise that many independent software
 +projects are ready to deploy a schism hardfork.

 2. We have no proof that Mike Hearn and Gavin Andresen are trying to do
 something bad to Bitcoin. So far everything they have done is (or should
 be) allowed. They have forked an open source software and implemented a
 voting system for a consensus rule change - doesn't sound like they are
 committing a crime here (either legally or morally). If they are
 qualified enough to maintain the software, or if the decision is
 technically correct or not is another story, and it should only matter
 to whoever uses / wants to use -XT.

 Again, no problem with the code fork, but the Schism hardfork is very
 risky regardless of their intentions.

 3. If Bitcoin's value can be decreased (or Bitcoin as a project killed)
 just by 2 people forking the software and submitting a consensus rule to
 a vote, it means Bitcoin is dead already and it should be worthless! We
 can't go around and panic every time somebody forks Bitcoin and tries to
 change something - this should be allowed by the nature of its license.
 If tomorrow 5 more people fork 5 different software implementing the
 bitcoin protocol and submit 5 different new consensus rules to a vote,
 then what? We should all sell so the price will drop to 1 cent, because
 it is somehow not good enough, not stable enough?

 If they don't extensively lobby Bitcoin companies, they don't start a
 massive PR campaign labelling other developers as obstructionists
 and don't misinform a big part of the Bitcoin users (often using
 logical fallacies, intentionally or not), probably those 5 new
 currencies will be ignored and nothing bad will happen.
 Unfortunately in this case a great division between users is being created.

 I can fork Bitcoin Core tomorrow into a Bitcoin-XYZ software which at some
 block in the future spends all the longest dusted coins to me, out of
 which I give away 50% to the miners (so the hashing power will have
 incentive to use my fork).

 Can they do it? YES
 Will they do it? NO
 Should the world care about this? NO

 It's as simple as that. We cannot continue to panic that Bitcoin as a
 project is at threat because somebody forked it.

 Can you please stop conflating Bitcoin Core as a project with the
 Bitcoin consensus rules.
 They are different things and nobody is or can be in charge of the
 latter, face it.

 Can you please also stop conflating software fork and

Re: [bitcoin-dev] Announcing Not-BitcoinXT

2015-08-17 Thread Adam Back via bitcoin-dev
Thank you Eric for saying what needs to be said.

Starting a fork war is just not constructive and there are multiple
proposals being evaluated here.

I think one thing that is not being focussed on so much is that
Bitcoin-XT is both a hard-fork and a soft-fork.  It's a hard-fork on
Bitcoin full-nodes, but it is also a soft-fork attack on Bitcoin Core
SPV nodes that did not opt in.  It exposes those SPV nodes to loss in
the likely event that Bitcoin-XT results in a network split.

The recent proposal here to run noXT (a patch to falsely claim to mine
on XT while actually rejecting its blocks) could add enough
uncertainty about the activation that Bitcoin-XT would probably have
to be aborted.


On 17 August 2015 at 15:03, Eric Lombrozo via bitcoin-dev wrote:

 In the entire history of Bitcoin we’ve never attempted anything even closely 
 resembling a hard fork like what’s being proposed here.

 Many of us have wanted to push our own hard-forking changes to the 
 protocol…and have been frustrated because of the inability to do so.

 This inability is not due to any malice on anyone’s part…it is a feature of 
 Satoshi’s protocol. For better or worse, it is *very hard* to change the 
 rules…and this is exactly what imbues Bitcoin with one of its most powerful 
 attributes: very well-defined settlement guarantees that cannot be suddenly 
 altered nor reversed by anyone.

 We’ve managed to have a few soft forks in the past…and for the most part 
 these changes have been pretty uncontroversial…or at least, they have not had 
 nearly the level of political divisiveness that this block size issue is 
 having. And even then, we’ve encountered a number of problems with these 
 deployments that have at times required goodwill cooperation between 
 developers and mining pool operators to fix.

 Again, we have NEVER attempted anything even remotely like what’s being 
 proposed - we’ve never done any sort of hard fork before like this. If even 
 fairly uncontroversial soft forks have caused problems, can you imagine the 
 kinds of potential problems that a hard fork over some highly polarizing 
 issue might raise? Do you really think people are going to want to 

 I can understand that some people would like bigger blocks. Other people 
 might want feature X, others feature Y…and we can argue the merits of this or 
 that to death…but the fact remains that we have NEVER attempted any hard 
 forking change…not even with a simple, totally uncontroversial no-brainer 
 improvement that would not risk any sort of ill-will that could hamper 
 remedies were it not to go as smoothly as we like. *THIS* is the fundamental 
 problem - the whole bigger block thing is a minor issue by comparison…it 
 could be any controversial change, really.

 Would you want to send your test pilots on their first flight…the first time 
 an aircraft is ever flown…directly into combat without having tested the 
 plane? This is what attempting a hard fork mechanism that’s NEVER been done 
 before in such a politically divisive environment basically amounts to…but 
 it’s even worse. We’re basically risking the entire air force (not just one 
 plane) over an argument regarding how many seats a plane should have that 
 we’ve never flown before.

 We’re talking billions of dollars’ worth of other people’s money that is on 
 the line here. Don’t we owe it to them to at least test out the system on a 
 far less controversial, far less divisive change first to make sure we can 
 even deploy it without things breaking? I don’t even care about the merits 
 regarding bigger blocks vs. smaller blocks at this point, to be quite honest 
 - that’s such a petty thing compared to what I’m talking about here. If we 
 attempt a novel hard-forking mechanism that’s NEVER been attempted before 
 (and which as many have pointed out is potentially fraught with serious 
 problems) on such a politically divisive, polarizing issue, the result is 
 each side will refuse to cooperate with the other out of spite…and can easily 
 lead to a war, tanking the value of everyone’s assets on both chains. All so 
 we can process 8 times the number of transactions we currently do? Even if it 
 were 100 times, we wouldn’t even come close to touching big payment 
 processors like Visa. It’s hard to imagine a protocol improvement that’s 
 worth the risk.

 I urge you to at least try to see the bigger picture here…and to understand 
 that nobody is trying to stop anyone from doing anything out of some desire 
 for maintaining control - NONE of us are able to deploy hard forks right now 
 without facing these problems. And different people obviously have different 
 priorities and preferences as to which of these changes would be best to do 
 first. This whole XT thing is essentially giving *one* proposal special 
 treatment above those that others have proposed. Many of us have only held 
 back from doing this out of our belief 

Re: [bitcoin-dev] Bitcoin XT 0.11A

2015-08-16 Thread Adam Back via bitcoin-dev
Hi Tamas

Do you find BIP 101, BIP 102, BIP 103 and the flexcap proposal
deserving of equal consideration?  Just curious because of your post.

Will you be interested to participate in the BIP review process and
perhaps attend the workshop on Bitcoin scaling announced here


On 16 August 2015 at 17:07, Tamas Blummer via bitcoin-dev wrote:
 Being a bitcoin software developer and an entrepreneur for years, I learned that 
 success is not a direct consequence of technology and is not inevitable.
 The BitcoinXT manifesto should resonate with many fellow entrepreneurs.
 I applaud Mike and Gavin for creating that choice for us.

 Tamas Blummer


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-11 Thread Adam Back via bitcoin-dev
I don't think Bitcoin being cheaper is the main characteristic of
Bitcoin.  I think the interesting thing is trustlessness - being able
to transact without relying on third parties.


On 11 August 2015 at 22:18, Michael Naber via bitcoin-dev wrote:
 The only reason why Bitcoin has grown the way it has, and in fact the only
 reason why we're all even here on this mailing list talking about this, is
 because Bitcoin is growing, since it's better money than other money. One
 of the key characteristics toward that is Bitcoin being inexpensive to
 transact. If that characteristic is no longer true, then Bitcoin isn't going
 to grow, and in fact Bitcoin itself will be replaced by better money that is
 less expensive to transfer.

 So the importance of this issue cannot be overstated -- it's compete or die
 for Bitcoin -- because people want to transact with global consensus at high
 volume, and because technology exists to service that want, then it's going
 to be met. This is basic rules of demand and supply. I don't necessarily
 disagree with your position on only wanting to support uncontroversial
 commits, but I think it's important to get consensus on the criticality of
 the block size issue: do you agree, disagree, or not take a side, and why?

 On Tue, Aug 11, 2015 at 2:51 PM, Pieter Wuille

 On Tue, Aug 11, 2015 at 9:37 PM, Michael Naber via bitcoin-dev wrote:

 Hitting the limit in and of itself is not necessarily a bad thing. The
 question at hand is whether we should constrain that limit below what
 technology is capable of delivering. I'm arguing that not only we should
 not, but that we could not even if we wanted to, since competition will
 deliver capacity for global consensus whether it's in Bitcoin or in some
 other product / fork.

 The question is not what the technology can deliver. The question is what
 price we're willing to pay for that. It is not a boolean "at this size,
 things break, and below it, they work". A small constant factor increase
 will unlikely break anything in the short term, but it will come with higher
 centralization pressure of various forms. There is discussion about whether
 these centralization pressures are significant, but citing that it's
 artificially constrained under the limit is IMHO a misrepresentation. It is
 constrained to aim for a certain balance between utility and risk, and
 neither extreme is interesting, while possibly still working.

 Consensus rules are what keeps the system together. You can't simply
 switch to new rules on your own, because the rest of the system will end up
 ignoring you. These rules are there for a reason. You and I may agree about
 whether the 21M limit is necessary, and disagree about whether we need a
 block size limit, but we should be extremely careful with change. My
 position as Bitcoin Core developer is that we should merge consensus changes
 only when they are uncontroversial. Even when you believe a more invasive
 change is worth it, others may disagree, and the risk from disagreement is
 likely larger than the effect of a small block size increase by itself: the
 risk that suddenly every transaction can be spent twice (once on each side
 of the fork), the very thing that the block chain was designed to prevent.

 My personal opinion is that we should aim to do a block size increase for
 the right reasons. I don't think fear of rising fees or unreliability should
 be an issue: if fees are being paid, it means someone is willing to pay
 them. If people are doing transactions despite being unreliable, there must
 be a use for them. That may mean that some use cases don't fit anymore, but
 that is already the case.



Re: [bitcoin-dev] Fees and the block-finding process

2015-08-11 Thread Adam Back via bitcoin-dev
I think everyone is expending huge effort on design, analysis and
implementation of the lowest cost technology for Bitcoin.

Changing parameters doesn't create progress on scalability fundamentals -
there really is an inherent cost and security / throughput tradeoff to
blockchains.  Security is quite central to this discussion.  It is
unrealistic in my opinion to suppose that everything can fit directly
on-chain in the fullest Bitcoin adoption across cash payments, internet of
things, QoS, micropayments, share trading, derivatives etc.  Hence the
interest in protocols like lightning (I encourage you and others to read the
paper, blog posts and implementation progress on the lightning-dev mailing
list).

Mid-term different tradeoffs can happen that are all connected to and
building on Bitcoin.  But whatever technologies win out for scale, they all
depend on Bitcoin security - anything built on Bitcoin requires a secure
base.  So I think it is logical that we strive to maintain and improve
Bitcoin security.  Long-term tradeoffs that significantly weaken security
for throughput or other considerations should be built on top of Bitcoin,
avoiding the creation of a one-size-fits-all compromise that
weakens Bitcoin to the lowest common denominator of centralisation,
insecurity and throughput tradeoffs.  This pattern (secure base, other
protocols built on top) is already the status quo - probably 99% of
Bitcoin transactions are already off-chain (in exchanges, web wallets
etc).  And there are various things that can and are being done to improve
the security of those solutions, with provable reserves, periodic on-chain
settlement, netting, lightning like protocols and other things probably
still to be invented.

Some of the longer-term things we probably don't know yet, but the future is
NOT bleak.  Lots of scope for technology improvement.


On 11 August 2015 at 20:26, Michael Naber via bitcoin-dev wrote:

 All things considered, if people want to participate in a global consensus
 network, and the technology exist to do it at a lower cost, then is it
 sensible or even possible to somehow arbitrarily set the price of
 participating in a global consensus network to be expensive? Can someone
 please walk me through how that's expected to play out because I'm really
 having a hard time understanding how it could work.

 On Tue, Aug 11, 2015 at 2:00 PM, Mark Friedenbach via bitcoin-dev wrote:

 More people using Bitcoin does not necessarily mean more transactions
 being processed by the block chain. Satoshi was forward-thinking enough to
 include a powerful script-signature system, something which has never
 really existed before. Though suffering from some limitations to be sure,
 this smart contract execution framework is expressive enough to enable a
 wide variety of new features without changing bitcoin itself.

 One of these invented features is micropayment channels -- the ability
 for two parties to rapidly exchange funds while only settling the final
 balance to the block chain, and to do so in an entirely trustless way.
 Right now people don't use scripts to do interesting things like this, but
 there is absolutely no reason why they can't. Lightning network is a vision
 of a future where everyone uses a higher-layer protocol for their
 transactions which only periodically settle on the block chain. It is
 entirely possible that you may be able to do all your day-to-day
 transactions in bitcoin yet only settle accounts every other week, totaling
 13kB per year. A 1MB block could support that level of usage by 4 million
 people, which is many orders of magnitude more than the number of people
 presently using bitcoin on a day to day basis.
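The figures in the paragraph above can be sanity-checked with some rough arithmetic. The ~500 bytes per settlement transaction and the one-block-per-10-minutes rate are my assumptions, chosen to match the post's "every other week, totaling 13kB per year" claim:

```python
BLOCK_BYTES = 1_000_000
BLOCKS_PER_YEAR = 6 * 24 * 365          # one block per ~10 minutes -> 52,560
SETTLEMENTS_PER_YEAR = 52 // 2          # settle every other week -> 26
BYTES_PER_SETTLEMENT = 500              # assumed size of one settlement tx

bytes_per_user_year = SETTLEMENTS_PER_YEAR * BYTES_PER_SETTLEMENT
print(bytes_per_user_year)              # 13000 -- the "13kB per year" cited

capacity_per_year = BLOCK_BYTES * BLOCKS_PER_YEAR
users_supported = capacity_per_year // bytes_per_user_year
print(users_supported)                  # 4043076 -- the "~4 million people"
```

So under these assumptions a 1 MB block chain used purely for periodic channel settlement does support roughly the four million regular users the post describes.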

 And that, by the way, is without considering as-yet uninvented
 applications of existing or future script which will provide even further
 improvements to scale. This is very fertile ground being explored by very
 few people. One thing I hope to come out of this block size debate is a lot
 more people (like Joseph Poon) looking at how bitcoin script can be used to
 enable new and innovative resource-efficient and privacy-enhancing payment

 The network has room to grow. It just requires wallet developers and
 other infrastructure folk to step up to the plate and do their part in
 deploying this technology.

 On Tue, Aug 11, 2015 at 2:14 AM, Angel Leon wrote:

 - policy neutrality.
 - It can't be censored.
 - it can't be shut down
 - and the rules cannot change from underneath you.

 except it can be shut down the minute it actually gets used by its
 inability to scale.

 what's the point of having all this if nobody can use it?
 what's the point of going through all that energy and CO2 for a mere
 24,000 transactions an hour?

 It's clear that it's just a matter of time before it collapses.

 Here's a simple proposal (concept) that doesn't pretend to 

Re: [bitcoin-dev] What Lightning Is

2015-08-10 Thread Adam Back via bitcoin-dev
In terms of usage I think you'd more imagine a wallet that basically
parks bitcoins in channels at all times; so long as they are
routable there is no loss, and the scalability achieved thereby is
strongly advantageous.  There is even the potential for users to
earn fees by having their wallets participate in channel rebalancing
(where hubs pay users to rebalance channels - each party ends up with
the same net position, but funds move from one user-owned channel to
another).  Exchange deposit, withdrawal, payments, even in-exchange
trades can usefully happen in lightning for faster, cheaper, more
scalable

Re: [bitcoin-dev] trust

2015-08-08 Thread Adam Back via bitcoin-dev
If you are saying that some people are happy trusting other people,
and so would be perfectly fine with off-chain use of Bitcoin, then we
agree, and I already said in the post you are replying to that
improving the scale and interoperability of that off-chain use case
would be a constructive thing for someone to do.  However that use
case is not a
strong argument for weakening Bitcoin's security to get to more scale
for that use case.

In a world where we could have both scale and decentralisation, of
course it would be nice to provide people with that outlook more
security than they seem to want.  And sometimes people don't understand
why security is useful until it goes wrong, so it would be a useful
thing to do.  (Like insurance, or your money being seized by PayPal out
of the blue, etc.)  And indeed providing security at scale may be
possible with lightning-like protocols that people are working on.


Re: [bitcoin-dev] trust

2015-08-08 Thread Adam Back via bitcoin-dev
On 8 August 2015 at 09:54, Thomas Zander wrote:
 I didn't say off-chain, and gave an example of an on-chain use case with trusted 

That's basically the definition of off-chain.  When we say MtGox or
Coinbase etc. are off-chain transactions, that is because a middleman
has the private keys to the coins and gave you an IOU of some kind.


Re: [bitcoin-dev] Fees and the block-finding process

2015-08-07 Thread Adam Back via bitcoin-dev
Please try to focus on constructive technical comments.

On 7 August 2015 at 23:12, Thomas Zander via bitcoin-dev wrote:
 What will the backlash be when people here that are pushing for
 off-chain transactions fail to produce a properly working alternative, which
 essentially means we have to say NO to more users.

But 99% of Bitcoin transactions are already off-chain.  There are
multiple competing companies offering consumer retail service with
off-chain settlement.

I wasn't clear, but in your previous mail you seemed to say you don't
mind trusting other people with your money, and so presumably you are
OK using these services, and so have no problem?

 At this time and this size of bitcoin community, my personal experience (and
 I've been part of many communities) saying NO to new customers

Who said no to anything?  The systems of off-chain transfer already
exist and are, by comparison to Bitcoin's protocol, simple and rapid
to adapt and scale.

Indications are that we can even do off-chain at scale with
Bitcoin-similar trust-minimisation using lightning and duplex payment
channels; and people are working on that right now.

I think it would be interesting and useful for someone, with an
interest in low trust, high scale transactions, to work on and propose
an interoperability standard and API for such off-chain services to be
accessed by wallets, and perhaps periodic on-chain inter-service


Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-05 Thread Adam Back via bitcoin-dev
On 5 August 2015 at 11:18, Hector Chu via bitcoin-dev wrote:
 Miners would be uniquely placed to know how best to vary the block size to 
 maximize their
 profit resulting from these two prices. [...]
 In that respect a dynamic block size voted on by miners periodically would
 go some way to rectify this inefficiency.

This kind of thing has been discussed here, even recently.  It is not
without problems.

You may find the flexcap idea summarised in outline by Greg Maxwell
and Mark Friedenbach a month or so back interesting in showing that
one can achieve such effects without handing over a free vote to
miners and hence avoid many (though probably not all) of the
side-effects inherent in giving miners control.

About side-effects, I think we can make the argument that there are
limits, because other than in an in-extremis sense, miners are not
necessarily aligned with security, nor with maximising user utility
and value.
For example switching-cost economics are common in networks (cell
phone service pricing); maybe Bitcoin would have a really high
switching cost if miners were to cartelise.

Also miners are in a complex game competing with each other, and this
degree of control risks selfish mining issues or other cartel attacks
or bandwidth/verification/latency related attacks being made worse.
e.g. see the recent paper by Aviv Zohar.

Generally speaking economically dependent full nodes are holding
miners honest by design.  Changing that dynamic by shifting influence
can have security and control impacting side-effects, and needs to be
thought about carefully.

About security: to try to make those comparisons a bit more formal, I
posted this taxonomy of types of security that proposals could be
compared against, in order of security:

1. consensus rule (your block is invalid if you attack)
2. economic incentive alignment (you tend to lose money if you attack)
3. overt (attack possible but detectable, hence probably less likely
to happen for reputation or market signal reasons even if possible)
4. meta incentive (assume people would not attack if they have an
investment or interest in seeing Bitcoin continue to succeed)
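The ordering in the taxonomy above can be encoded so that proposals can be compared mechanically. A minimal sketch (the class and its use are illustrative, not part of the original post); a lower value means stronger security, matching the list:

```python
from enum import IntEnum

class Security(IntEnum):
    """Adam Back's taxonomy, ordered strongest (1) to weakest (4)."""
    CONSENSUS_RULE     = 1  # your block is invalid if you attack
    ECONOMIC_INCENTIVE = 2  # you tend to lose money if you attack
    OVERT              = 3  # attack possible but detectable
    META_INCENTIVE     = 4  # attackers are invested in Bitcoin's success

def stronger(a: Security, b: Security) -> Security:
    """Return the stronger of two security levels (lower rank wins)."""
    return min(a, b)

# Comparing two hypothetical proposals by the weakest guarantee each relies on:
print(stronger(Security.CONSENSUS_RULE, Security.META_INCENTIVE).name)
# CONSENSUS_RULE
```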


Re: [bitcoin-dev] A Transaction Fee Market Exists Without a Block Size Limit--new research paper suggests

2015-08-05 Thread Adam Back via bitcoin-dev
On 5 August 2015 at 12:51, Hector Chu wrote:
 The market I am thinking of would be open to all, not just miners. But
 miners would probably be best placed to profit from such a market, as it is
 their business to know about the revenue/costs tradeoff.

This prediction market in block-size seems like something extremely
complex to operate and keep secure in a decentralised fashion.  There
are several experimental projects right now trying to figure out how
to do this securely, using blockchain ideas, but it is early days for
those projects.

We also have no particular reason, other than the meta-incentive, to
suppose that it should result in a secure parameter set.

I suspect that, while it is interesting in the abstract, it risks
converting a complex security problem into an even more complex one,
rather than constituting an incremental security improvement which is
more the context of day to day discussions here.


Re: [bitcoin-dev] A reason we can all agree on to increase block size

2015-08-03 Thread Adam Back via bitcoin-dev
Again, this should not be a political or business compromise model - we
must focus on scientific evaluation, technical requirements and

But specifically, as you asked, a group of Chinese miners said they
would not run it:

Imagine if we had nuclear reactor design criteria - we would not be
asking around among companies what parameter they would compromise on.
We'd be looking to scientific analysis of what is safe, based on
empirical and theoretical work on safety.  If we're risking $4b of
other people's money (and a little bit of mine) I would strongly want a
scientific approach.

A closer analogy would be the NIST SHA3 design process.  With crypto
building blocks it is a security / speed tradeoff, a little analogous
to the security / throughput trade off in Bitcoin.

They do not ask companies or governments which algorithm they like or
what parameter they'd compromise on.  They have a design competition
and analyse the algorithms and parameters for security margin and
speed optimisation in hardware and software.  Much effort is put in
and it is very rigorous because a lot is at stake if they get it wrong.


On 3 August 2015 at 09:34, Hector Chu wrote:
 On 3 August 2015 at 08:16, Simon Liu via bitcoin-dev wrote:

 Increasing the block size shouldn't be a problem for Chinese miners.
 Five of the largest - F2Pool, Antpool, BW, BTCChina, Huobi - have
 already signed a draft agreement indicating they are fine with an
 increase to 8 MB:

 What's the current stance of the Chinese pools on Bitcoin XT, should Bitcoin
 Core refuse to increase the block size to 8 MB in a timely fashion? Would
 they run it if the economic majority (e.g. Coinbase, Bitpay, etc.) publicly
 stated their support for big blocks?

Re: [bitcoin-dev] A compromise between BIP101 and Pieter's proposal

2015-07-31 Thread Adam Back via bitcoin-dev
That's all well and fine.  But the pattern of your argument, I would
say, is arguing security down, i.e. saying something is not secure
anyway, nothing is secure, everything could be hacked, so let's forget
that and give up - so that what is left is basically no
decentralisation security.

It is not paranoid to take decentralisation security seriously; it is
necessary, because it is critical to Bitcoin.  Security in depth means
taking what security you can get from available defences.


On 31 July 2015 at 15:07, wrote:
 Yes, data-center operators are bound to follow laws, including NSLs and gag
 orders. How about your ISP? Is it bound to follow laws, including NSLs and
 gag orders?

 Do you think everyone should run a full node behind TOR? No way, your
 repressive government could just block TOR:

 Or they could raid your home and seize your Raspberry Pi if they couldn't
 read your encrypted internet traffic. You will have a hard time proving you
 are not using TOR for child porn or cocaine.

 If you are living in a country like this, running Bitcoin in an offshore VPS
 could be much easier. Anyway, Bitcoin shouldn't be your first thing to worry
 about. Revolution is probably your only choice.

 Data-centers would get hacked. How about your Raspberry Pi?

 Corrupt data-center employee is probably the only valid concern. However,
 there is nothing (except cost) to stop you from establishing multiple full
 nodes all over the world. If your Raspberry Pi at home could no longer fully
 validate the chain, it could become a header-only node to make sure your VPS
 full nodes are following the correct chaintip. You may even buy hourly
 charged cloud hosting in different countries to run header-only nodes at
 negligible cost.

 There is no single point of failure in a decentralized network. Having
 multiple nodes will also save you from Sybil attack and geopolitical risks.
 Again, if all data-centres and governments in the world are turning against
 Bitcoin, it is delusional to think we could fight against them without using
 any real weapon.

 By the way, I'm quite confident that my current full node at home is
 capable of running at 8MB blocks.

 Quoting Adam Back

 I think the trust-the-data-center logic obviously fails, and I was
 talking about this scenario in the post you are replying to.  You are
 trusting the data-center operator, period.  If one could trust
 data-centers to run verified code, to not get hacked, to not filter
 traffic, to not respond to court orders without notifying you, etc.,
 that would be great, but that's unfortunately not what happens.

 Data-center operators are bound to follow laws, including NSLs and gag
 orders.  They also get hacked, employ humans who can be corrupt or
 blackmailed, and are themselves centralisation points for policy attack.
 Snowden-related disclosures, and keeping aware of security news, show
 this is the reality.

 This isn't much about bitcoin even, its just security reality for hosting
 anything intended to be secure via decentralisation, or just hosting in
 general while at risk of political or policy attack.


bitcoin-dev mailing list

Re: [bitcoin-dev] Block size following technological growth

2015-07-30 Thread Adam Back via bitcoin-dev
That's what is nice about proposals: they are constructive and help
move the conversation forward!

On 30 July 2015 at 18:20, Gavin Andresen via bitcoin-dev wrote:
 Specific comments:

 So we'd get to 2MB blocks in the year 2021. I think that is much too
 conservative, and the most likely effect of being that conservative is that
 the main blockchain becomes a settlement network, affordable only for
 large-value transactions.

But, if we agree that 17%/year is consistent with network
improvements, then by arguing that this is too conservative, are you
not actually going beyond suggesting throughput increases that benefit
from bandwidth improvements, and explicitly arguing for borrowing from
Bitcoin's already very weak decentralisation to create more
throughput?  (Or arguing for subsidising transaction fees, if
borrowing so deeply that excess capacity pushes beyond demand.)
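For reference, the doubling arithmetic behind the quoted 2MB-by-2021
figure can be checked directly.  This is only a sketch: the 17%/year
rate is the one under discussion above, and the assumption of a 1MB
base starting around late 2016 is mine, not from the thread.

```python
import math

# At 17%/year compound growth, how long does capacity take to double?
growth = 0.17
doubling_years = math.log(2) / math.log(1 + growth)
print(round(doubling_years, 1))  # 4.4 years, i.e. 1MB -> 2MB around 2021
# if growth starts from a 1MB base in late 2016
```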

I think the logical implication of this is that we should first focus
on improving decentralisation, to make enough security headroom to
reclaim extra throughput.

(To be clear, there are concrete things that can be done, and actually
are being done, to improve decentralisation via ecosystem education
and mining protocol improvements, but it's safer to wait a few months
and see whether those efforts take effect.)

Secondly, in this assumption, are you considering that lightning will
likely have been online for many years by 2021, and that the situation
will be hugely different?

I think an incremental and conservative approach is safer.  People can
probably get a lightning prototype running about as fast as a
hard-fork could be safely rolled out.

I do think it is normal to be conservative with security and with $4b
of other people's money.  It is no longer an experimental system on
which to run fail-fast experiments.

 I don't think your proposal strikes the right balance between centralization
 of payments (a future where only people running payment hubs, big merchants,
 exchanges, and wallet providers settle on the blockchain) and centralization
 of mining.

What criteria, in your opinion, should we use to strike that balance?
I think throughput increases trading off decentralisation would be
more reasonable if decentralisation weren't already in very bad shape.


Re: [bitcoin-dev] Why Satoshi's temporary anti-spam measure isn't temporary

2015-07-29 Thread Adam Back via bitcoin-dev
I don't think people consider other blockchains a competitive threat.
A PoW blockchain is a largely singleton data structure for security
reasons (single highest hashrate); it is hard for an alternative chain
to bootstrap or provide meaningful security.  Secondly, the world
largely lacks the expertise to maintain a blockchain to Bitcoin's
security level; perhaps you can see a hint of this in the recently
disclosed security vulnerability found by Pieter Wuille and Gregory
Maxwell.  Appeals to this argument are not resonating and are probably
not helping your case.  Bitcoin has security properties, and a
competing system can't achieve better properties by bypassing
security; any blockchain faces the same fundamental security /
decentralisation trade-offs.

Secondly Bitcoin can obviously compete with itself with different
parameters and de facto *does* today.  I think it is a safe estimate
that >99% of Bitcoin transactions right now are happening in
Bitcoin-related systems with various degrees of audit, reconciliation,
provable reserves etc.  I think we can expect this to continue and
become more secure via more reconciliation, and longer term via
lightning or Bitcoin sidechains with different parameters.  It is a
different story to have a single central system (Bitcoin with
parameters changed to the point of centralisation failure) vs having
multiple choices, because some transactions can more easily use
relatively centralised systems (eg micropayments), and more
interestingly the combination of a secure and decentralised layer 1
plus choices of less decentralised layer 2 options, can be interesting
because the layer 2 is provided cover from attack.  There is less to
be gained by attacking relatively centralised layer 2 because any
payments at risk of policy abuse (which is typically a small subset)
can easily switch to layer 1.  That in itself makes layer 2
transactions also less susceptible to policy abuse.  Further, it
appears from work so far that lightning should add significant scale
while retaining trustlessness and a good degree of decentralisation.

Finally, you seem to be focusing on artificial limits, when that is
not the issue under consideration.  The limits are technical, relating
to decentralisation and security; I won't go over them again as this
topic has been covered many times in recent months.  Any chain that
tried to go to extreme parameters (very low block intervals, or very
large blocksizes) would have the same decentralisation problems as
Bitcoin would if it did the same thing.  There are a number of
altcoins that have failed as a result of poor parameter choices; there
are inherent security limits.


ps Etiquette note for yourself and others: please don't be repetitive
or attempt to be forceful.  Many people have spent many years
understanding this very complex system; from my own experience it is
rare indeed to think of an entirely new concept or analysis that
hasn't been long considered and put to bed 3 or 4 years ago.
Thoughtful, polite and constructive comments are welcome, but I
recommend not starting from the assumption that you have a clearer and
better insight than the entire technical community, because I have to
say from my own experience that is very rarely the case.  It can be
useful to test theories on the #bitcoin IRC channel to find out what
has already been concluded, find the references, and avoid having that
hashed out on this list, which is trying to stay focussed on technical
solutions.

On 29 July 2015 at 16:10, Raystonn . via bitcoin-dev wrote:
 Cheapest way to send value? Is this what Bitcoin is trying to do? So
 all of the smart contract, programmable money, consensus coding and
 tremendous developer effort is bent to the consumer demand for cheaper
 fees. Surely thou jests!

 These other features can be replicated into any alternative blockchain,
 including those with lower fees.  In the open-source world of
 cryptocurrency, no feature will remain a value-add for very long after it
 has been identified to be such.  Anything adding value will quickly be
 absorbed into competing alternative blockchains.  That will leave economic
 policy as the distinguishing factor.

 ... it is not the case ... that reluctance to concede
 blocksize is an attempt to constrain capacity. Greg Maxwell thoroughly
 explained in this thread that the protocol's current state of
 development relies on blocksize for security and, ultimately, as a
 means of protecting its degree of decentralization.

 A slow increase, or lack of increase, in the maximum transaction rate
 will cause pressure on fees.  Whether this is the desired goal is not
 relevant.  Everyone has
 agreed this will be the outcome.  As to a smaller block size being needed
 for additional decentralization, one must simply ask how much we are all
 willing to pay for that additional decentralization.  It is likely that the
 benefit thereto will have to be demonstrated by some power attacking and

Re: [bitcoin-dev] Răspuns: Personal opinion on the fee market from a worried local trader

2015-07-29 Thread Adam Back via bitcoin-dev
On 29 July 2015 at 20:41, Ryan Butler via bitcoin-dev wrote:
 Does an unlimited blocksize imply the lack of a fee market?  Isn't every
 miner able to set their minimum accepted fee or transaction acceptance
 policy?

The assumption is that won't work, because any miner can break ranks
and do so profitably; to expect otherwise is to expect oligopoly
behaviour, which is the antithesis of a decentralised mining system.
It's in fact a similar argument as to why decentralisation of mining
provides policy neutrality: some miner somewhere with some hashrate
will process your transaction even if some other miners decide by
policy not to mine it.  It is for a similar reason that free
transactions are processed today: policies vary, and this is good for
ensuring many types of transaction get processed.
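The break-ranks argument can be illustrated with a toy model (all
numbers are illustrative, not real network data): a cartel imposes a
fee floor, and a single defecting miner profitably scoops up the
transactions the cartel excludes, so the floor is unstable.

```python
# Toy model: a fee cartel vs. one defecting miner.
# Fees are in arbitrary units; the list is purely illustrative.
pending_fees = [0.5, 1.0, 2.0, 5.0, 10.0]
cartel_floor = 2.0  # cartel-imposed minimum fee

# Revenue per block if every miner honours the floor:
cartel_revenue = sum(f for f in pending_fees if f >= cartel_floor)

# A defector mines everything, including below-floor transactions:
defector_revenue = sum(pending_fees)

# Defecting is strictly more profitable whenever any below-floor
# fee-paying transactions are pending, so the floor is unstable.
assert defector_revenue > cartel_revenue
print(cartel_revenue, defector_revenue)  # 17.0 18.5
```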


Re: [bitcoin-dev] Răspuns: Personal opinion on the fee market from a worried local trader

2015-07-29 Thread Adam Back via bitcoin-dev
btw the fact that mining is (or can be) anonymous also makes oligopoly
or cartel behaviour likely unstable.  Miners can break ranks and
process transactions others wish to block, or with lower fees than a
cartel would like to charge, without detection.

Anonymous mining is a feature and helps ensure policy neutrality.

This is all overlaid by the 51% attack: if a coherent cartel arose
that could maintain 51%, and had enough mutual self-interest to make
that stable, it could attack miners bypassing its cartel policies by
orphaning their blocks.  This is partly why mining decentralisation is
important.  That is also an overt act which is very detectable, and
could lead to technical counter-measures by the users, who are
ultimately in control of the protocol.  So there is some game theory
suggesting it would be inadvisable for miners to be overt in cartel
attacks.  Non-overt attacks can't prevent anonymous undercutting of
cartel-desired fee minimums.
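The hashrate threshold matters here.  A sketch using the standard
gambler's-ruin analysis (the same catch-up calculation as in the
Bitcoin whitepaper's attacker section, applied here to a cartel trying
to orphan a defector's block that is one block ahead of it) shows the
orphaning attack only succeeds with certainty above 50%:

```python
def orphan_probability(q):
    """Probability that a cartel with hashrate share q eventually
    catches up from one block behind and orphans a defector's block
    (gambler's ruin: (q/p)^z with deficit z = 1)."""
    p = 1.0 - q  # combined share of everyone else
    return 1.0 if q >= p else q / p

print(orphan_probability(0.30))  # ~0.43: the attack often fails
print(orphan_probability(0.60))  # 1.0: a majority cartel always wins
```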


