Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Anthony Towns via bitcoin-dev
On Tue, Dec 08, 2015 at 05:21:18AM +, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev
>  wrote:
> > Having a cost function rather than separate limits does make it easier to
> > build blocks (approximately) optimally, though (ie, just divide the fee by
> > (base_bytes+witness_bytes/4) and sort). Are there any other benefits?
> Actually being able to compute fees for your transaction: If there are
> multiple limits that are "at play" then how you need to pay would
> depend on the entire set of other candidate transactions, which is
> unknown to you.

Isn't that solvable in the short term, if miners just agree to order
transactions via a cost function, without enforcing it at consensus
level until a later hard fork that can also change the existing limits
to enforce that balance?

Ie, go from (1MB base + 3MB witness + 20k sigops) with segwit initially,
to something like (B + W + 200*U + 40*S < 5e6), where B is base bytes,
W is witness bytes, U is the number of UTXOs added (or removed) and S is
the number of sigops, or whatever factors actually make sense.
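
As an illustrative sketch (not part of the original mail), ordering by
that kind of single cost function is just a sort by fee per unit of cost;
the coefficients and the 5e6 budget below are the hypothetical values
from the formula above, and the field names are made up:

  def tx_cost(base_bytes, witness_bytes, utxos_added, sigops):
      # the illustrative B + W + 200*U + 40*S formula from above
      return base_bytes + witness_bytes + 200 * utxos_added + 40 * sigops

  def order_for_block(candidates):
      # candidates: dicts with "fee", "base", "witness", "utxos", "sigops"
      return sorted(candidates,
                    key=lambda tx: tx["fee"] / tx_cost(tx["base"], tx["witness"],
                                                       tx["utxos"], tx["sigops"]),
                    reverse=True)

  BLOCK_COST_BUDGET = 5_000_000  # the "< 5e6" bound from the formula above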

I guess segwit does allow soft-forking more sigops immediately -- segwit
transactions only add sigops into the segregated witness, which doesn't
get counted for existing consensus. So it would be possible to take the
opposite approach, and make the rule immediately be something like:

  50*S < 1M
  B + W/4 + 25*S' < 1M

(where S is sigops in base data, and S' is sigops in witness) and
just rely on S trending to zero (or soft-fork in a requirement that
non-segregated witness transactions have fewer than B/50 sigops) so that
there's only one (linear) equation to optimise, when deciding fees or
creating a block. (I don't see how you could safely set the coefficient
for S' too much smaller though)

B+W/4+25*S' for a 2-in/2-out p2pkh would still be 178+206/4+25*2=280
though, which would allow 3570 transactions per block, versus 2700 now,
which would only be a 32% increase...
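
For what it's worth, that arithmetic checks out; a quick sketch reusing
the numbers above (178 base bytes, 206 witness bytes, 2 witness sigops,
and ~2700 such transactions per 1MB block today):

  cost = 178 + 206 / 4 + 25 * 2     # 279.5, rounded to 280 in the text
  txs_segwit = 1_000_000 / 280      # ~3571 transactions per block
  txs_now = 2_700                   # current 1MB blocks, per the text
  print(cost, txs_segwit / txs_now - 1)   # ~0.32, ie the ~32% increase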

> These don't, however, apply all that strongly if only one limit is
> likely to be the limiting limit... though I am unsure about counting
> on that; after all if the other limits wouldn't be limiting, why have
> them?

Sure, but, at least for now, there are already two limits being
hit. Having one is *much* better than two, but I don't think two is a
lot better than three?

(Also, the ratio between the parameters doesn't necessarily seem like a
constant; it's not clear to me that hardcoding a formula with a single
limit is actually better than hardcoding separate limits, and letting
miners/the market work out coefficients that match the sort of contracts
that are actually being used)

> > That seems kinda backwards.
> It can seem that way, but all limiting schemes have pathological cases
> where someone runs up against the limit in the most costly way. Keep
> in mind that casual pathological behavior can be suppressed via
> IsStandard like rules without baking them into consensus; so long as
> the candidate attacker isn't miners themselves. Doing so where
> possible can help avoid cases like the current sigops limiting which
> is just ... pretty broken.

Sure; it just seems to be halving the increase in block space (60% versus
100% extra for p2pkh, 100% versus 200% for 2/2 multisig p2sh) for what
doesn't actually look like that much of a benefit in fee comparisons?

I mean, as far as I'm concerned, segwit is great even if it doesn't buy
any improvement in transactions/block, so even a 1% gain is brilliant.
I'd just rather the 100%-200% gain I was expecting. :)

Cheers,
aj


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Gregory Maxwell via bitcoin-dev
On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev
 wrote:
> Having a cost function rather than separate limits does make it easier to
> build blocks (approximately) optimally, though (ie, just divide the fee by
> (base_bytes+witness_bytes/4) and sort). Are there any other benefits?

Actually being able to compute fees for your transaction: If there are
multiple limits that are "at play" then how you need to pay would
depend on the entire set of other candidate transactions, which is
unknown to you. Avoiding the need for a fancy solver in the miner is
also virtuous, because requiring software complexity there can make
for centralization advantages or divert development/maintenance cycles
in open source software off to other ends... The multidimensional
optimization is also harder to accommodate in improved relay schemes;
this is the same problem as building blocks, but much more critical,
both because of the need for consistency and because of how frequently
it has to be done.
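
As an illustrative sketch (not from the original mail) of the fee point:
with a single linear cost function, a wallet only needs the going feerate
per unit of cost to price a transaction, independent of what else is in
the mempool; with several separate limits, the effective price depends on
which limit ends up binding, which depends on everyone else's transactions.

  def required_fee(cost_units, feerate_per_unit):
      # eg a 280-unit transaction at 50 sat/unit needs 14,000 sat,
      # regardless of which other transactions are competing for the block
      return cost_units * feerate_per_unit

  print(required_fee(280, 50))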

These don't, however, apply all that strongly if only one limit is
likely to be the limiting limit... though I am unsure about counting
on that; after all if the other limits wouldn't be limiting, why have
them?

> That seems kinda backwards.

It can seem that way, but all limiting schemes have pathological cases
where someone runs up against the limit in the most costly way.  Keep
in mind that casual pathological behavior can be suppressed via
IsStandard like rules without baking them into consensus; so long as
the candidate attacker isn't miners themselves. Doing so where
possible can help avoid cases like the current sigops limiting which
is just ... pretty broken.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Anthony Towns via bitcoin-dev
> On Mon, Dec 07, 2015 at 10:02:17PM +, Gregory Maxwell wrote:
> > If widely used this proposal gives a 2x capacity increase
> > (more if multisig is widely used),

So from IRC, this doesn't seem quite right -- capacity is constrained as

  base_size + witness_size/4 <= 1MB

rather than

  base_size <= 1MB and base_size + witness_size <= 4MB

or similar. So if you have a 500B transaction and move 250B into the
witness, you're still using up 250B+250B/4 of the 1MB limit, rather than
just 250B of the 1MB limit.

In particular, if you use as many p2pkh transactions as possible, you'd
have 800kB of base data plus 800kB of witness data, and for a block
filled with 2-of-2 multisig p2sh transactions, you'd hit the limit at
670kB of base data and 1.33MB of witness data.

That would be 1.6MB and 2MB of total actual data if you hit the limits
with real transactions, so it's more like a 1.8x increase for real
transactions afaics, even with substantial use of multisig addresses.
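
A quick sanity check of those figures (not in the original mail), plugging
them straight into the base + witness/4 <= 1,000,000 constraint above:

  def block_cost(base_bytes, witness_bytes):
      return base_bytes + witness_bytes / 4

  print(block_cost(800_000, 800_000))    # p2pkh-heavy block: exactly 1,000,000
  print(block_cost(670_000, 1_330_000))  # 2-of-2 multisig block: ~1,002,500 (rounded inputs)
  print(block_cost(0, 4_000_000))        # degenerate all-witness block: 1,000,000, ie 4MB of data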

The 4MB consensus limit could only be hit by having a single trivial
transaction using as little base data as possible, then a single huge
4MB witness. So people trying to abuse the system have 4x the blocksize
for 1 block's worth of fees, while people using it as intended only get
1.6x or 2x the blocksize... That seems kinda backwards.

Having a cost function rather than separate limits does make it easier to
build blocks (approximately) optimally, though (ie, just divide the fee by
(base_bytes+witness_bytes/4) and sort). Are there any other benefits?

But afaics, you could just have fixed consensus limits and use the cost
function for building blocks, though? ie sort txs by fee divided by [B +
S*50 + W/3] (where B is base bytes, S is sigops and W is witness bytes)
then just fill up the block until one of the three limits (1MB base,
20k sigops, 3MB witness) is hit?
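
As a minimal sketch (mine, not from the mail) of that approach: rank
transactions by fee per unit of the cost function, but keep the three
consensus limits separate when filling the block; the transaction fields
here are illustrative.

  BASE_LIMIT, SIGOP_LIMIT, WITNESS_LIMIT = 1_000_000, 20_000, 3_000_000

  def cost(tx):
      # the B + 50*S + W/3 ranking function described above
      return tx["base"] + 50 * tx["sigops"] + tx["witness"] / 3

  def build_block(mempool):
      block, base, sigops, witness = [], 0, 0, 0
      for tx in sorted(mempool, key=lambda t: t["fee"] / cost(t), reverse=True):
          if (base + tx["base"] > BASE_LIMIT or
                  sigops + tx["sigops"] > SIGOP_LIMIT or
                  witness + tx["witness"] > WITNESS_LIMIT):
              break  # stop as soon as any of the three limits is hit, as described
          block.append(tx)
          base += tx["base"]
          sigops += tx["sigops"]
          witness += tx["witness"]
      return block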

(Doing a hard fork to make *all* the limits -- base data, witness data,
and sigop count -- part of a single cost function might be a win; I'm
just not seeing the gain in forcing witness data to trade off against
block data when filling blocks is already a 2D knapsack problem)

Cheers,
aj


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Anthony Towns via bitcoin-dev
On Mon, Dec 07, 2015 at 10:02:17PM +, Gregory Maxwell via bitcoin-dev wrote:
> ... bringing Segregated Witness to Bitcoin.
> The particular proposal amounts to a 4MB blocksize increase at worst.

Bit ambiguous what "worst" means here; lots of people would say the
smallest increase is the worst option. :)

By my count, P2PKH transactions get 2x space saving with segwit [0],
while 2-of-2 multisig P2SH transactions (and hence most of the on-chain
lightning transactions) get a 3x space saving [1]. An on-chain HTLC (for
a cross-chain atomic swap eg) would also get 3x space saving [2]. The most
extreme lightning transactions (uncooperative close with bonus anonymity)
could get a 6x saving, but would probably run into SIGOP limits [3].
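
As a rough sketch (my own approximate byte counts, not the footnoted
calculations) of where those multiples come from: if witness data stops
counting against the 1MB limit, the saving for a transaction is its total
size divided by the bytes that remain in the base block.

  def saving(total_bytes, bytes_moved_to_witness):
      return total_bytes / (total_bytes - bytes_moved_to_witness)

  print(saving(225, 107))   # 1-in/1-out p2pkh: ~1.9x, roughly the 2x quoted
  print(saving(340, 217))   # 1-in/1-out 2-of-2 multisig p2sh: ~2.8x, roughly 3x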

> If widely used this proposal gives a 2x capacity increase
> (more if multisig is widely used),

So I think it's fair to say that on its own it gives up to a 2x increase
for ordinary pay to public key transactions, and a 3x increase for 2/2
multisig and (on-chain) lightning transactions (which would mean lightning
could scale to ~20M users with 1MB block sizes based on the estimates
from Tadge Dryja's talk). More complicated smart contracts (even just 3
of 5 multisig) presumably benefit even more from this, which seems like
an interesting approach to (part of) jgarzik's "Fidelity problem".

Averaging those numbers as a 2.5x improvement, means that combining
segwit with other proposals would allow you to derate them by a factor
of 2.5, giving:

 BIP-100: maximum of 12.8MB
 BIP-101: 3.2MB in 2016, 6.4MB in 2018, 12.8MB in 2020, 25.6MB in 2022..
 2-4-8:   800kB in 2016, 1.6MB in 2018, 3.2MB in 2020
 BIP-103: 400kB in 2016, 470kB in 2018, 650kB in 2020, 1MB in 2023...

(ie, if BIP-103 had been the "perfect" approach, then post segwit,
it would make sense to put non-consensus soft-limits back in place
for quite a while)

> TL;DR:  I propose we work immediately towards the segwit 4MB block
> soft-fork which increases capacity and scalability, and recent speedups
> and incoming relay improvements make segwit a reasonable risk.

I guess segwit effectively introduces two additional dimensions for
working out how to optimally pack transactions into a block -- there's
the existing constraints on block bytes (<=1MB) and sigops (<=20k), but
there are probably additional constraints on witness bytes (<=3MB) and
there *could* be a different constraint for sigops in witnesses (<=3*20k?
<=4*20k?) compared to sigops in the block while remaining a soft-fork.

It could also be an opportunity to combine the constraints, ie
(segwit_bytes + 50*segwit_sigs < 6M) which would make it easier to avoid
attacks where people try sending transactions with lots of sigops in very
few bytes, filling up blocks by sigops, but only paying fees proportional
to their byte count.
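
As an illustrative sketch of that combined constraint (numbers as above,
not part of any concrete proposal): a sigop-heavy transaction consumes
the shared budget as if it were correspondingly larger, so byte-based fee
pricing no longer under-charges it.

  SEGWIT_BUDGET = 6_000_000   # the "< 6M" bound suggested above

  def segwit_cost(witness_bytes, witness_sigops):
      return witness_bytes + 50 * witness_sigops

  # a 300-byte witness carrying 1,000 sigops costs 50,300 units -- it uses
  # block space as if it were ~50kB
  print(segwit_cost(300, 1_000))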

Hmm, after a quick look, I'm not sure if the current segwit branch
actually accounts for sigops in segregated witnesses? If it does, afaics
it simply applies the existing 20k limit to the total, which seems
too low to me?

Having segwit with the current 1MB limit on the traditional block
contents plus an additional 3MB for witness data seems like it would
also give a somewhat gradual increase in transaction volume from the
current 1x rate to an eventual 2x or 3x rate as wallet software upgrades
to support segregated witness transactions. So if problems were found
when block+witness data hit 1.5MB, there'd still be time to roll out
fixes before it got to 1.8MB or 2MB or 3MB. ie this further reduces the
risk compared to a single step increase to 2x capacity.

BTW, it's never been quite clear to me what the risks are precisely.
Here are some:

 - sometime soon, blockchain supply can't meet demand

+ I've never worked out how you'd tell if this is the case;
  there's potentially infinite demand if everything is free, so at
  one level it's trivially true, but that's not helpful.

+ Presumably if this were happening in a way that "matters", fees
  would rise precipitously. Perhaps median fees of $2 USD/kB would
  indicate this is happening? If so, it's not here yet and seems
  like it's still a ways off.

+ If it were happening, then, presumably, people would become
  less optimistic about bitcoin and the price of BTC would drop/not
  rise, but that seems pretty hard to interpret.

 - it becomes harder to build on blocks found by other miners,
   encouraging mining centralisation (which then makes censorship easier,
   and fungibility harder) or forcing trust between miners (eg SPV mining
   empty blocks)

+ latency/bandwidth limitations mean miners can't get block
  information quickly enough (mitigated by weak blocks and IBLT)

+ blocks can't be verified quickly enough (due to too many crypto
  ops per block, or because the UTXO set can't be kept in RAM)
  (mitigated by libsecp256k1 improvements, ..?)

+ constructing a new block to mine takes too long

 - it becomes harder to m

[bitcoin-dev] Coalescing Transactions BIP Draft

2015-12-07 Thread Chris Priest via bitcoin-dev
I made a post a few days ago where I laid out a scheme for
implementing "coalescing transactions" using a new opcode. I have
since come to the realization that an opcode is not the best way to do
this. A much better approach, I think, is a new "transaction type" field
that is split off from the version field. Other uses can come out of
this type field; wildcard inputs are just the first one.

There are two unresolved issues. First, there might need to be a limit
on how many inputs are included in the "coalesce". Let's say you have
an address that has 100,000,000 inputs. If you were to coalesce them
all into one single input, that means that every node has to process
each of these 100,000,000 inputs, which could take a long time. But then
again, the total number of inputs a wildcard can cover is limited to
the actual number of UTXOs in the pool, which is very much a
finite/constrained number.

One solution is to limit all wildcard inputs to, say, 10,000 items. If
you have more inputs than that to coalesce, you have to do it in chunks
of 10,000, starting from the beginning. I want wildcard inputs to look
as much like normal inputs as possible to facilitate implementation, so
I don't think embedding a "max search" inside the transaction is the
best idea. I think if there is going to be a limit, it should be
implied.

The other issue is limiting wildcard inputs to only inputs that have
been confirmed for a certain number of blocks. Sort of like how a
coinbase output has to reach a certain age before it can be spent,
maybe wildcard inputs should only work on inputs older than a certain
block age. Someone
brought up in the last thread that re-orgs can cause problems. I don't
quite see how that could happen, as re-orgs don't really affect
address balances, only block header values, which coalescing
transactions have nothing to do with.
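
As a hypothetical sketch (not part of the draft BIP) of how a node might
expand a wildcard input under both ideas above -- an implied per-wildcard
cap and a coinbase-style maturity requirement -- with all names and
numbers being illustrative:

  MAX_COALESCE = 10_000      # implied per-wildcard limit suggested above
  MIN_CONFIRMATIONS = 100    # illustrative maturity requirement

  def expand_wildcard(address, utxo_set, tip_height):
      # return the UTXOs a wildcard input for `address` would sweep,
      # oldest first, capped at MAX_COALESCE entries
      eligible = [u for u in utxo_set
                  if u["address"] == address
                  and tip_height - u["height"] + 1 >= MIN_CONFIRMATIONS]
      eligible.sort(key=lambda u: (u["height"], u["txid"], u["vout"]))
      return eligible[:MAX_COALESCE]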

Here is the draft:
https://github.com/priestc/bips/blob/master/bip-coalesc-wildcard.mediawiki


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Bryan Bishop via bitcoin-dev
On Mon, Dec 7, 2015 at 4:02 PM, Gregory Maxwell wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating
> proposals were presented. I think this would be a good time to share my
> view of the near term arc for capacity increases in the Bitcoin system. I
> believe we’re in a fantastic place right now and that the community
> is ready to deliver on a clear forward path with a shared vision that
> addresses the needs of the system while upholding its values.

ACK.

One of the interesting take-aways from the workshops for me has been
that there is a large discrepancy between what developers are doing
and what's more widely known. When I was doing initial research and
work for my keynote at the Montreal conference (
http://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf -- an
attempt at being exhaustive, prior to seeing the workshop proposals ),
what I was most surprised by was the discrepancy between what we think
is being talked about versus what has been emphasized or socially
processed (lots of proposals appear in text, but review efforts are
sometimes "hidden" in corners of github pull request comments, for
example). As another example, the libsecp256k1 testing work reached a
level unseen except perhaps in the aerospace industry, but these sorts
of details are not apparent if you are reading bitcoin-dev archives.
It is very hard to listen to all ideas and find great ideas.
Sometimes, our time can be almost completely exhausted by evaluating
inefficient proposals, so it's not surprising that rough consensus
building could take time. I suspect we will see consensus moving in
positive directions around the proposals you have highlighted.

When Satoshi originally released the Bitcoin whitepaper, practically
nobody-- somehow with the exception of Hal Finney-- looked, because the
cost of evaluating cryptographic system proposals is so high and
everyone was jaded and burned out by the past umpteen decades of them.
(I have IRC logs from January 10th 2009 where I immediately
dismissed Bitcoin after I had seen its announcement on the
p2pfoundation mailing list, perhaps in retrospect I should not let
family tragedy so greatly impact my evaluation of proposals...). It's
hard to evaluate these proposals. Sometimes it may feel like random
proposals are review-resistant, or designed to burn our time up. But I
think this is more reflective of the simple fact that consensus takes
effort, and it's hard work, and this is to be expected in this sort of
system design.

Your email contains a good summary of recent scaling progress and of
efforts presented at the Hong Kong workshop. I like summaries. I have
previously recommended making more summaries and posting them to the
mailing list. In general, it would be good if developers were to write
summaries of recent work and efforts and post them to the bitcoin-dev
mailing list. BIP drafts are excellent. Long-term proposals are
excellent. Short-term coordination happens over IRC, and that makes
sense to me. But I would point out that many of the developments even
from, say, the Montreal workshop were notably absent from the mailing
list. Unless someone was paying close attention, they wouldn't have
noticed some of those efforts which, in some cases, haven't been
mentioned since. I suspect most of this is a matter of attention,
review and keeping track of loose ends, which can be admittedly
difficult.

Short (or even long) summaries in emails are helpful because they
increase the ability of the community to coordinate and figure out
what's going on. Often I will write an email that summarizes some
content simply because I estimate that I am going to forget the
details in the near future, and if I am going to forget them then it
seems likely that others might. This creates a broad base of
proposals and content to build from when we're doing development work
in the future, making for a much richer community as a consequence.
The contributions from the scalingbitcoin.org workshops are a welcome
addition, and the proposal outlined in the above email contains a good
summary of recent progress. We need more of this sort of synthesis,
we're richer for it. I am excitedly looking forward to the impending
onslaught of Bitcoin progress.

- Bryan
http://heybryan.org/
1 512 203 0507


[bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Gregory Maxwell via bitcoin-dev
The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating
proposals were presented. I think this would be a good time to share my
view of the near term arc for capacity increases in the Bitcoin system. I
believe we’re in a fantastic place right now and that the community
is ready to deliver on a clear forward path with a shared vision that
addresses the needs of the system while upholding its values.

I think it’s important to first clearly express some of the relevant
principles that I think should guide the ongoing development of the
Bitcoin system.

Bitcoin is P2P electronic cash that is valuable over legacy systems
because of the monetary autonomy it brings to its users through
decentralization. Bitcoin seeks to address the root problem with
conventional currency: all the trust that's required to make it work--

-- Not that justified trust is a bad thing, but trust makes systems
brittle, opaque, and costly to operate. Trust failures result in systemic
collapses, trust curation creates inequality and monopoly lock-in, and
naturally arising trust choke-points can be abused to deny access to
due process. Through the use of cryptographic proof and decentralized
networks Bitcoin minimizes and replaces these trust costs.

With the available technology, there are fundamental trade-offs between
scale and decentralization. If the system is too costly people will be
forced to trust third parties rather than independently enforcing the
system's rules. If the Bitcoin blockchain’s resource usage, relative
to the available technology, is too great, Bitcoin loses its competitive
advantages compared to legacy systems because validation will be too
costly (pricing out many users), forcing trust back into the system.
If capacity is too low and our methods of transacting too inefficient,
access to the chain for dispute resolution will be too costly, again
pushing trust back into the system.

Since Bitcoin is an electronic cash, it _isn't_ a generic database;
the demand for cheap highly-replicated perpetual storage is unbounded,
and Bitcoin cannot and will not satisfy that demand for non-ecash
(non-Bitcoin) usage, and there is no shame in that. Fortunately, Bitcoin
can interoperate with other systems that address other applications,
and--with luck and hard work--the Bitcoin system can and will satisfy
the world's demand for electronic cash.

Fortunately, a lot of great technology is in the works that make
navigating the trade-offs easier.

First up: after several years in the making Bitcoin Core has recently
merged libsecp256k1, which results in a huge increase in signature
validation performance. Combined with other recent work we're now getting
ConnectTip performance 7x higher in 0.12 than in prior versions. This
has been a long time coming, and without its anticipation and earlier
work such as headers-first I probably would have been arguing for a
block size decrease last year.  This improvement in the state of the
art for widely available production Bitcoin software sets a stage for
some capacity increases while still catching up on our decentralization
deficit. This shifts the bottlenecks off of CPU and more strongly onto
propagation latency and bandwidth.

Versionbits (BIP9) is approaching maturity and will allow the Bitcoin
network to have multiple in-flight soft-forks. Up until now we’ve had to
completely serialize soft-fork work, and also had no real way to handle
a soft-fork that was merged in core but rejected by the network. All
that is solved in BIP9, which should allow us to pick up the pace of
improvements in the network. It looks like versionbits will be ready
for use in the next soft-fork performed on the network.

The next thing is that, at Scaling Bitcoin Hong Kong, Pieter Wuille
presented on bringing Segregated Witness to Bitcoin. What is proposed
is a _soft-fork_ that increases Bitcoin's scalability and capacity by
reorganizing data in blocks to handle the signatures separately, and in
doing so takes them outside the scope of the current blocksize limit.

The particular proposal amounts to a 4MB blocksize increase at worst. The
separation allows new security models, such as skipping downloading data
you're not going to check and improved performance for lite clients
(especially ones with high privacy). The proposal also includes fraud
proofs which make violations of the Bitcoin system provable with a compact
proof. This completes the vision of "alerts" described in the "Simplified
Payment Verification" section of the Bitcoin whitepaper, and would make it
possible for lite clients to enforce all the rules of the system (under
a new strong assumption that they're not partitioned from someone who
would generate the proofs). The design has numerous other features like
making further enhancements safer and eliminating signature malleability
problems. If widely used this proposal gives a 2x capacity increase
(more if multisig is widely used), but most importantly it makes that
additional capa