Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2016-01-18 Thread Anthony Towns via bitcoin-dev
TLDR:

  1.7MB effective block size is a better estimate than 1.6MB for p2pkh
  with segwit. 2MB for 2/2 multisig still seems accurate.

  Additional post-segwit soft forked script improvements can improve
  the effective block size for p2pkh txns from 1.7MB to 1.9MB, and for
  2/2 multisig from 2MB to 2.5MB/3MB.

  (To the best of my knowledge, anyway; if I've made a mistake in my
  maths or assumptions, corrections appreciated)

On Tue, Dec 08, 2015 at 02:58:03PM +1000, Anthony Towns via bitcoin-dev wrote:
> So from IRC, this doesn't seem quite right -- capacity is constrained as
>   base_size + witness_size/4 <= 1MB
..
> That would be 1.6MB and 2MB of total actual data if you hit the limits
> with real transactions, so it's more like a 1.8x increase for real
> transactions afaics, even with substantial use of multisig addresses.

I think these numbers are slightly mistaken -- I was only aware of version
1 segwit scripts at the time, and assumed 256-bit hashes would be used
for all segwit transactions. However, version 0 segwit txns would be more
efficient for p2pkh, with the same security as bitcoin currently has
(which seems fine).

Also, segwit will make two additional soft-fork improvements possible that
would have a positive effect on transactions per block without requiring
more data per block: ecdsa public key recovery (more space efficient for
*both* multisig and p2pkh) and schnorr signatures (more space efficient
for multisig). I don't know how soon they're planned to be worked on after
segwit's roll out; basic Schnorr signatures are in the Elements sidechain,
but I don't think key recovery has been implemented anywhere? (Actually,
I guess they could both be done already via softforking OP_NOP opcodes,
though segwit makes them slightly cleaner.)

Anyhoo here's some revised figures, working explained in the footnotes.
If I've made mistakes, corrections appreciated, of course.

p2pkh:

  now: 10+146i+34o [0]
  segwit: 10+41i+36o + 0.25*105*i [1]
  ecdsa recovery: 10+41i+33o + 0.25*71*i [2]
  80-bit schnorr: 10+41i+33o + 0.25*71*i (same as ecdsa recovery imo [3])
  128-bit schnorr: 10+41i+43o + 0.25*106*i [4]

(128-bit schnorr provides a not very useful increase in security here)

2-of-2 multisig:

  now: 10+254i+32o [5]
  segwit: 10+43i+43o + 0.25*213*i [6]
  ecdsa recovery: 10+43i+43o + 0.25*187*i [7]
  80-bit schnorr: 10+41i+33o + 0.25*71*i (same as p2pkh)
  128-bit schnorr: 10+41i+43o + 0.25*106*i (same as p2pkh)

(segwit, ecdsa recovery and 128-bit schnorr all provide a beneficial
security increase here, as per the "Time to worry about 80-bit collision
attacks" thread; 80-bit schnorr provides the same security as current
p2sh multisig)

Using the same assumptions as in the previous mail, ie that over the long
term the number of inputs is about the same as the number of outputs, these
simplify to:

            p2pkh     2-of-2 msig
  now       10+180i   10+286i
  segwit    10+104i   10+140i
  recov     10+92i    10+133i
  sch80     10+92i    10+92i
  sch128    10+111i   10+111i

Translating "now" to 100%, the scaling factors work out to be:

i=1, i->inf

p2pkh   2-of-2 msig
now 100%100%
segwit  166%-173%   197%-204%
recov   186%-195%   207%-215%
sch80   186%-195%   290%-310%
sch128  157%-162%   244%-257%
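
For anyone wanting to check the maths, here's a rough C++ sketch (mine;
it just re-evaluates the simplified formulas above, so the rounding may
differ slightly from the truncated percentages in the table):

    #include <cstdio>

    int main() {
        // per-tx "virtual size" is 10 + k*i bytes, with inputs == outputs
        struct Row { const char* name; double p2pkh, msig; };
        const double now_p2pkh = 180, now_msig = 286;
        const Row rows[] = {
            {"segwit", 104, 140},
            {"recov",  92,  133},
            {"sch80",  92,  92},
            {"sch128", 111, 111},
        };
        for (const Row& r : rows) {
            // scaling factor at i=1, then in the i->infinity limit
            printf("%-7s p2pkh %5.1f%%-%5.1f%%  msig %5.1f%%-%5.1f%%\n",
                   r.name,
                   100 * (10 + now_p2pkh) / (10 + r.p2pkh),
                   100 * now_p2pkh / r.p2pkh,
                   100 * (10 + now_msig) / (10 + r.msig),
                   100 * now_msig / r.msig);
        }
        return 0;
    }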

So 170% for p2pkh (rather than my original estimate of 160%) and 200% for
multisig (same as my original estimate), which can rise via further
soft-forks up to 190% for p2pkh and 250% or 300% for 2-of-2 multisig
(depending on whether you want additional security for 2/2 multisig
beyond what's currently available).

(I'm assuming people are mostly interested in the number of transactions
per block (or tx/second or tx/day); if miners are worried about the
actual data per block (which affects orphan rates) implied by the above,
but don't want to work it out themselves, I could do the maths for that
too pretty easily. Let me know)
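
As one worked example of that calculation (again mine, using the sizes
from the segwit p2pkh formula above for a 1-input, 1-output transaction),
this is where the 1.7MB figure in the TLDR comes from:

    #include <cstdio>

    int main() {
        const double base = 10 + 41 + 36;  // non-witness bytes per tx
        const double wit  = 105;           // witness bytes per tx
        const double vsize = base + wit / 4;  // counts against the 1MB limit
        const double txs   = 1e6 / vsize;     // ~8830 txs per block
        printf("%.0f txs/block, %.2f MB actual data\n",
               txs, txs * (base + wit) / 1e6); // ~1.70 MB
        return 0;
    }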


If a 2MB hard fork is done first, then the 1/4 discount for segwit could
mean up to 8MB of total data per block -- from what I understand this
is currently infeasible; so I presume that segwit on top of a hardfork
and prior to IBLT/weak blocks would need to have a smaller discount or
no discount applied so as to ensure total data per block remains at 4MB
or less. With no discount for witness data (ie, no "accounting tricks")
those figures look like:

            p2pkh       2-of-2 msig
  now       100%        100%
  segwit    99%         95%
  recov     122%-124%   104%
  sch80     122%-124%   191%-198%
  sch128    94%-95%     148%-150%

That is, without discounting, segwit comes at a slight cost in
transactions per block, and additional soft forks will only result in
25% gain for p2pkh (via key recovery) and 50%-100% for 2-of-2 multisig
(through the use of schnorr sigs and key recovery, and depending on
whether you want 128 bits of security rather than 80 bits).

(So without the discounting factor, with 

Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-21 Thread Anthony Towns via bitcoin-dev
On Mon, Dec 21, 2015 at 05:21:55AM +0000, Btc Drak via bitcoin-dev wrote:
> On Mon, Dec 21, 2015 at 4:33 AM, Pieter Wuille via bitcoin-dev <
> > So I'd like to ask the community that we work towards this plan, as it
> > allows us to make progress without being forced to make a possibly divisive
> > choice for one hardfork or another yet.
> Thank you for saying this. I also think the plan is solid and delivers
> multiple benefits without being contentious. The wins are so numerous,
> it's frankly a no-brainer.

+1's are off-topic, but... +1. My impression is that each of libsecp256k1,
versionbits, segregated witness, IBLT, weak blocks, and OP_CSV has
been demonstrated to be a significant improvement that is implementable,
and doesn't introduce any new attacks or risks [0]. There's some freaking
awesome engineering that's gone into all of those.

> I guess the next step for segwit is a BIP and deployment on a testnet?

I think the following proposed features are as yet missing from Pieter's
segwit branch, and I'm guessing patches for them would be appreciated:

 - enforcing the proposed base+witness/4 < 1MB calculation
 - applying limits to sigops seen in witness signatures
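
For the first of those, a minimal sketch of the check (mine, done in
integer arithmetic so the quarter-discount doesn't round; not Pieter's
actual code):

    #include <cstdint>

    // base_size + witness_size/4 <= 1MB, scaled by 4 on both sides
    bool CheckBlockSize(uint64_t base_size, uint64_t witness_size) {
        const uint64_t MAX_VIRTUAL_SIZE = 1000000;  // 1MB
        return 4 * base_size + witness_size <= 4 * MAX_VIRTUAL_SIZE;
    }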

I guess there might be other things that still need to be implemented
as well (and presumably bugs of course)?

I think I'm convinced that the proposed plan is the best approach (as
opposed to separate base<1MB, witness<3MB limits, or done as a hard fork,
or without committing to a merkle head for the witnesses, eg), though.

jl2012 already pointed to a draft segwit BIP in another thread, repeated
here though:

 https://github.com/jl2012/bips/blob/segwit/bip-segwit.mediawiki

Cheers,
aj (hoping that was enough content after the +1 to not get modded ;)

[0] I'm still not persuaded that even a small increase in blocksize
doesn't introduce unacceptable risks (frankly, I'm not entirely
persuaded the *current* limits don't have unacceptable risk) and that
frustrates me no end. But I guess (even after six months of reading
arguments about it!) I'm equally unpersuaded that there's anything more
to the intense desire for more blocksize than fear/uncertainty/doubt
mixed with a desire for transactions to be effectively free, rather
than costing even a few cents each... So, personally, since the above
doesn't really resolve that quandary for me, it doesn't really resolve
the blocksize debate for me either. YMMV.



Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-21 Thread Jorge Timón via bitcoin-dev
To clarify, although I have defended the deployment of segwit as a
hardfork, I have no strong opinion on whether to do that or do it as a
softfork first and then do a hardfork to move things out of the
coinbase to a better place.
I have a strong opinion against never doing the later hardfork though.
I would have supported segwit for Bitcoin even if it was only possible
as a hardfork, but there's a softfork version and that will hopefully
accelerate its deployment.
Since the plan seems to be to do a softfork first and a hardfork
moving the witness tree (and probably more things) outside of the
coinbase later, I support the plan for segwit deployment.
In fact, the plan is very exciting to me.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-20 Thread Btc Drak via bitcoin-dev
On Mon, Dec 21, 2015 at 4:33 AM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Dec 8, 2015 at 6:07 AM, Wladimir J. van der Laan wrote:
> > On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via
> bitcoin-dev wrote:
> >> TL;DR: I propose we work immediately towards the segwit 4MB block
> >> soft-fork which increases capacity and scalability, and recent speedups
> >> and incoming relay improvements make segwit a reasonable risk. BIP9
> >> and segwit will also make further improvements easier and faster to
> >> deploy. We’ll continue to set the stage for non-bandwidth-increase-based
> >> scaling, while building additional tools that would make bandwidth
> >> increases safer long term. Further work will prepare Bitcoin for further
> >> increases, which will become possible when justified, while also
> providing
> >> the groundwork to make them justifiable.
> >
> > Sounds good to me.
>
> Better late than never, let me comment on why I believe pursuing this plan
> is important.
>
> For months, the block size debate, and the apparent need for agreement on
> a hardfork has distracted from needed engineering work, fed the external
> impression that nothing is being done, and generally created a toxic
> environment to work in. It has affected my own productivity and health, and
> I do not think I am alone.
>
> I believe that soft-fork segwit can help us out of this deadlock and get
> us going again. It does not require the pervasive assumption that the
> entire world will simultaneously switch to new consensus rules like a
> hardfork does, while at the same time:
> * Give a short-term capacity bump
> * Show the world that scalability is being worked on
> * Actually improve scalability (as opposed to just scale) by reducing
> bandwidth/storage and indirectly improving the effectiveness of systems
> like Lightning.
> * Solve several unrelated problems at the same time (fraud proofs, script
> extensibility, malleability, ...).
>
> So I'd like to ask the community that we work towards this plan, as it
> allows us to make progress without being forced to make a possibly divisive
> choice for one hardfork or another yet.
>
Thank you for saying this. I also think the plan is solid and delivers
multiple benefits without being contentious. The wins are so numerous,
it's frankly a no-brainer.

I guess the next step for segwit is a BIP and deployment on a testnet?


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-20 Thread Douglas Roark via bitcoin-dev

On 2015/12/20 20:50, Mark Friedenbach via bitcoin-dev wrote:
> I am fully in support of the plan laid out in "Capacity increases 
> for the bitcoin system".
> 
> This plan provides real benefit to the ecosystem in solving a 
> number of longstanding problems in bitcoin. It improves the 
> scalability of bitcoin considerably.
> 
> Furthermore it is time that we stop bikeshedding, start 
> implementing, and move forward, lest we lose more developers to
> the toxic atmosphere this hard-fork debacle has created.

Another +1 here. While I'd still like to see some sort of short-term
bump happen this year - good points have been raised about SegWit
uptake by wallet devs, for one thing - I really do think this is one
of the last pieces of the puzzle that'll make Bitcoin reasonably
stable and robust. If people have legitimate concerns, that's great,
and they should be addressed. I just worry that more navel-gazing and
bikeshedding will play into the hands of those with less than noble
intentions. That and, due to the somewhat complicated nature of
SegWit, it may take time to get skeptical miners and wallet devs
on-board.

While we're talking about capacity increases, I'd like to reiterate
that I do think there should be some sort of short-term bump (Jeff's
BIP 102 or his "BIP 202" variant, Dr. Back's 2/4/8 proposal ("BIP
248"), etc.), hopefully chosen by this summer so that everybody can
start to prepare. I believe the KISS theory will work best. I talked
to a couple of miners at Scaling Bitcoin. It was obvious they
generally prefer simple solutions. (For that matter, if I put my
miner's cap on, I prefer simple solutions too!) The research presented
at Scaling Bitcoin regarding block size formulas was quite interesting
and worthy of discussion. The research was also, IMO, nowhere near
ready for consensus. Work and discussions on that front should
certainly continue and push for a more permanent (final?) block size
solution. I just think that, barring some extraordinary solution that
hasn't been widely discussed yet, a permanent solution isn't feasible
right now. A temporary bump isn't ideal. It's just the only thing I've
seen that strikes me as having any real shot at consensus.

--
Douglas Roark
Cryptocurrency, network security, travel, and art.
https://onename.com/droark
joro...@vt.edu
PGP key ID: 26623924


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-20 Thread Mark Friedenbach via bitcoin-dev
I am fully in support of the plan laid out in "Capacity increases for the
bitcoin system".

This plan provides real benefit to the ecosystem in solving a number of
longstanding problems in bitcoin. It improves the scalability of bitcoin
considerably.

Furthermore it is time that we stop bikeshedding, start implementing, and
move forward, lest we lose more developers to the toxic atmosphere this
hard-fork debacle has created.

On Mon, Dec 21, 2015 at 12:33 PM, Pieter Wuille via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Tue, Dec 8, 2015 at 6:07 AM, Wladimir J. van der Laan wrote:
> > On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via
> bitcoin-dev wrote:
> >> TL;DR: I propose we work immediately towards the segwit 4MB block
> >> soft-fork which increases capacity and scalability, and recent speedups
> >> and incoming relay improvements make segwit a reasonable risk. BIP9
> >> and segwit will also make further improvements easier and faster to
> >> deploy. We’ll continue to set the stage for non-bandwidth-increase-based
> >> scaling, while building additional tools that would make bandwidth
> >> increases safer long term. Further work will prepare Bitcoin for further
> >> increases, which will become possible when justified, while also
> providing
> >> the groundwork to make them justifiable.
> >
> > Sounds good to me.
>
> Better late than never, let me comment on why I believe pursuing this plan
> is important.
>
> For months, the block size debate, and the apparent need for agreement on
> a hardfork has distracted from needed engineering work, fed the external
> impression that nothing is being done, and generally created a toxic
> environment to work in. It has affected my own productivity and health, and
> I do not think I am alone.
>
> I believe that soft-fork segwit can help us out of this deadlock and get
> us going again. It does not require the pervasive assumption that the
> entire world will simultaneously switch to new consensus rules like a
> hardfork does, while at the same time:
> * Give a short-term capacity bump
> * Show the world that scalability is being worked on
> * Actually improve scalability (as opposed to just scale) by reducing
> bandwidth/storage and indirectly improving the effectiveness of systems
> like Lightning.
> * Solve several unrelated problems at the same time (fraud proofs, script
> extensibility, malleability, ...).
>
> So I'd like to ask the community that we work towards this plan, as it
> allows us to make progress without being forced to make a possibly divisive
> choice for one hardfork or another yet.
>
> --
> Pieter
>


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-14 Thread Adam Back via bitcoin-dev
I think someone, maybe Pieter, commented on this relay issue that it
would likely be very transitory, as a lot of stuff would be fairly
quickly upgraded in practice, judging from previous deployment
experience. In any case there is a huge excess of connectivity and
capacity in the p2p network, both for keeping a network of various
versions connected and for supporting SPV client load (SPV load is
quite low relative to capacity; even one respectable node can support
a large number of SPV clients).

(Ie, two classes of network node and connectivity wouldn't be a
problem in practice even if the split did persist; also the higher
capacity, better run nodes are more likely to upgrade due to having
more clued-in power user, miner, pool or company operators.)

Maybe someone with more detailed knowledge could clarify further.

Adam

On 14 December 2015 at 19:21, Jonathan Toomim via bitcoin-dev wrote:
> This means that a server supporting SW might only hear of the tx data and
> not get the signature data for some transactions, depending on how the relay
> rules worked (e.g. if the SW peers had higher minrelaytxfee settings than
> the legacy peers). This would complicate fast block relay code like IBLTs,
> since we now have to check to see that the recipient has both the tx data
> and the witness/sig data.
>
> The same issue might happen with block relay if we do SW as a soft fork. A
> SW node might see a block inv from a legacy node first, and might start
> downloading the block from that node. This block would then be marked as
> in-flight, and the witness data might not get downloaded. This shouldn't be
> too hard to fix by creating an inv for the witness data as a separate
> object, so that a node could download the block from e.g. Peer 1 and the
> segwit data from Peer 2.
>
> Of course, the code would be simpler if we did this as a hard fork and we
> could rely on everyone on the segwit fork supporting the segwit data.
> Although maybe we want to write the interfaces in a way that supports some
> nodes not downloading the segwit data anyway, just because not every node
> will want that data.
>
> I haven't had time to read sipa's code yet. I apologize for talking out of a
> position of ignorance. For anyone who has, do you feel like sharing how it
> deals with these network relay issues?
>
> By the way, since this thread is really about SegWit and not about any other
> mechanism for increasing Bitcoin capacity, perhaps we should rename it
> accordingly?
>
>
> On Dec 12, 2015, at 11:18 PM, Mark Friedenbach via bitcoin-dev wrote:
>
> A segwit supporting server would be required to support relaying segwit
> transactions, although a non-segwit server could at least inform a wallet of
> segwit txns observed, even if it doesn't relay all information necessary to
> validate.
>
> Non segwit servers and wallets would continue operations as if nothing had
> occurred.
>
> If this means essentially that a soft fork deployment of SegWit will require
> SPV wallet servers to change their logic (or risk not being able to send
> payments) then it does seem to me that a hard fork to deploy this non
> controversial change is not only cleaner (on the data structure side) but
> safer in terms of the potential to affect the user experience.
>
>
> — Regards,
>
>
>


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-14 Thread Jonathan Toomim via bitcoin-dev
This means that a server supporting SW might only hear of the tx data and not 
get the signature data for some transactions, depending on how the relay rules 
worked (e.g. if the SW peers had higher minrelaytxfee settings than the legacy 
peers). This would complicate fast block relay code like IBLTs, since we now 
have to check to see that the recipient has both the tx data and the 
witness/sig data.

The same issue might happen with block relay if we do SW as a soft fork. A SW 
node might see a block inv from a legacy node first, and might start 
downloading the block from that node. This block would then be marked as 
in-flight, and the witness data might not get downloaded. This shouldn't be too 
hard to fix by creating an inv for the witness data as a separate object, so 
that a node could download the block from e.g. Peer 1 and the segwit data from 
Peer 2.

Of course, the code would be simpler if we did this as a hard fork and we could 
rely on everyone on the segwit fork supporting the segwit data. Although maybe 
we want to write the interfaces in a way that supports some nodes not 
downloading the segwit data anyway, just because not every node will want that 
data.

I haven't had time to read sipa's code yet. I apologize for talking out of a 
position of ignorance. For anyone who has, do you feel like sharing how it 
deals with these network relay issues?

By the way, since this thread is really about SegWit and not about any other 
mechanism for increasing Bitcoin capacity, perhaps we should rename it 
accordingly?


On Dec 12, 2015, at 11:18 PM, Mark Friedenbach via bitcoin-dev wrote:

> A segwit supporting server would be required to support relaying segwit 
> transactions, although a non-segwit server could at least inform a wallet of 
> segwit txns observed, even if it doesn't relay all information necessary to 
> validate.
> 
> Non segwit servers and wallets would continue operations as if nothing had 
> occurred.
> 
> If this means essentially that a soft fork deployment of SegWit will require 
> SPV wallet servers to change their logic (or risk not being able to send 
> payments) then it does seem to me that a hard fork to deploy this non 
> controversial change is not only cleaner (on the data structure side) but 
> safer in terms of the potential to affect the user experience.
> 
> 
> — Regards,





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-12 Thread Mark Friedenbach via bitcoin-dev
A segwit supporting server would be required to support relaying segwit
transactions, although a non-segwit server could at least inform a wallet
of segwit txns observed, even if it doesn't relay all information necessary
to validate.

Non segwit servers and wallets would continue operations as if nothing had
occurred.

If this means essentially that a soft fork deployment of SegWit will
require SPV wallet servers to change their logic (or risk not being able to
send payments) then it does seem to me that a hard fork to deploy this non
controversial change is not only cleaner (on the data structure side) but
safer in terms of the potential to affect the user experience.


— Regards,


On Sat, Dec 12, 2015 at 1:43 AM, Gavin Andresen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón  wrote:
>
>> This is basically what I meant by
>>
>> struct hashRootStruct
>> {
>> uint256 hashMerkleRoot;
>> uint256 hashWitnessesRoot;
>> uint256 hashextendedHeader;
>> }
>>
>> but my design doesn't calculate other_root as it appears in your tree (it
>> is not necessary).
>>
>> It is necessary to maintain compatibility with SPV nodes/wallets.
>
> Any code that just checks merkle paths up into the block header would have
> to change if the structure of the merkle tree changed to be three-headed at
> the top.
>
> If it remains a binary tree, then it doesn't need to change at all-- the
> code that produces the merkle paths will just send a path that is one step
> deeper.
>
> Plus, it's just weird to have a merkle tree that isn't a binary tree.
>
> --
> --
> Gavin Andresen
>




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-11 Thread Gavin Andresen via bitcoin-dev
On Fri, Dec 11, 2015 at 11:18 AM, Jorge Timón  wrote:

> This is basically what I meant by
>
> struct hashRootStruct
> {
> uint256 hashMerkleRoot;
> uint256 hashWitnessesRoot;
> uint256 hashextendedHeader;
> }
>
> but my design doesn't calculate other_root as it appears in your tree (it
> is not necessary).
>
> It is necessary to maintain compatibility with SPV nodes/wallets.

Any code that just checks merkle paths up into the block header would have
to change if the structure of the merkle tree changed to be three-headed at
the top.

If it remains a binary tree, then it doesn't need to change at all-- the
code that produces the merkle paths will just send a path that is one step
deeper.

Plus, it's just weird to have a merkle tree that isn't a binary tree.

-- 
--
Gavin Andresen


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-11 Thread Jorge Timón via bitcoin-dev
On Dec 9, 2015 5:40 PM, "Gavin Andresen"  wrote:
>
> On Wed, Dec 9, 2015 at 3:03 AM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>> I think it would be logical to do as part of a hardfork that moved
>> commitments generally; e.g. a better position for merged mining (such
>> a hardfork was suggested in 2010 as something that could be done if
>> merged mining was used), room for commitments to additional block
>> back-references for compact SPV proofs, and/or UTXO set commitments.
>> Part of the reason to not do it now is that the requirements for the
>> other things that would be there are not yet well defined. For these
>> other applications, the additional overhead is actually fairly
>> meaningful; unlike the fraud proofs.
>
>
> So just design ahead for those future uses. Make the merkle tree:
>
>
>                  root_in_block_header
>                 /                    \
>      tx_data_root                  other_root
>                                   /          \
>                     segwitness_root        reserved_for_future_use_root

This is basically what I meant by

struct hashRootStruct
{
uint256 hashMerkleRoot;
uint256 hashWitnessesRoot;
uint256 hashextendedHeader;
}

but my design doesn't calculate other_root as it appears in your tree (it
is not necessary).

Since ceasing to require bip34 (height in coinbase) is also a hardfork (and a
trivial one) I suggested moving it at the same time. But thinking more
about it, since BIP34 also elegantly solves BIP30, I would keep the height
in the coinbase (even if we move it to the extended header tree as well for
convenience).
That extended header should be able to include future consensus-enforced
commitments (extra back-refs for compact proofs, txo/utxo commitments, etc)
or non-consensus data (merged mining data, miner-published data).
Greg Maxwell suggested moving those later and I answered fair enough. But
thinking more about it, if the extra commitments field is extensible, we
don't need to move anything now, and therefore we don't need those designs
(extra back-refs for compact proofs, txo/utxo commitments, etc) to be ready
before deploying a hardfork segregated witness: you just need to make sure
that the format is extensible via softfork in the future.

I'm therefore back to the "let's better deploy segregated witness as a
hardfork" position.
The change required to the softfork segregated witness implementation
would be relatively small.

Another option would be to deploy both parts (sw and the movement from the
coinbase to the extra header) at the same time but with different
activation conditions, for example:

- For sw: deploy as soon as possible with bip9.
- For the hardfork coinbase-to-extra-header movement: 1 year grace + bip9
for later miner upgrade confirmation.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Gregory Maxwell via bitcoin-dev
On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón  wrote:
> From this question one could think that when you said "we can do the
> cleanup hardfork later" earlier you didn't really mean it. And that
> you will oppose that hardfork later just like you are opposing it now.
> As said, I disagree that making a softfork first and then moving the
> commitment is less disruptive (because people will need to adapt their
> software twice), but if the intention is to never do the second part
> then of course I agree it would be less disruptive.
> How long after the softfork would you like to do the hardfork?
> 1 year after the softfork? 2 years? never?

I think it would be logical to do as part of a hardfork that moved
commitments generally; e.g. a better position for merged mining (such
a hardfork was suggested in 2010 as something that could be done if
merged mining was used), room for commitments to additional block
back-references for compact SPV proofs, and/or UTXO set commitments.
Part of the reason to not do it now is that the requirements for the
other things that would be there are not yet well defined. For these
other applications, the additional overhead is actually fairly
meaningful; unlike the fraud proofs.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Mark Friedenbach via bitcoin-dev
My apologies for the apparent miscommunication earlier. It is of interest
to me that the soft-fork be done which is necessary to put a commitment in
the most efficient spot possible, in part because that commitment could be
used for other data such as the merged mining auxiliary blocks, which are
very sensitive to proof size.

Perhaps we have a different view of how the commitment transaction would be
generated. Just as GBT doesn't create the coinbase, it was my expectation
that it wouldn't generate the commitment transaction either -- but
generation of the commitment would be easy, requiring either the coinbase
txid 100 blocks back, or the commitment txid of the prior transaction (note
this impacts SPV mining). The truncation shouldn't be an issue because the
commitment txn would not be part of the list of transactions selected by
GBT, and in any case the truncation would change the witness data which
changes the commitment.

On Wed, Dec 9, 2015 at 4:03 PM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón  wrote:
> > From this question one could think that when you said "we can do the
> > cleanup hardfork later" earlier you didn't really mean it. And that
> > you will oppose that hardfork later just like you are opposing it now.
> > As said, I disagree that making a softfork first and then moving the
> > commitment is less disruptive (because people will need to adapt their
> > software twice), but if the intention is to never do the second part
> > then of course I agree it would be less disruptive.
> > How long after the softfork would you like to do the hardfork?
> > 1 year after the softfork? 2 years? never?
>
> I think it would be logical to do as part of a hardfork that moved
> commitments generally; e.g. a better position for merged mining (such
> a hardfork was suggested in 2010 as something that could be done if
> merged mining was used), room for commitments to additional block
> back-references for compact SPV proofs, and/or UTXO set commitments.
> Part of the reason to not do it now is that the requirements for the
> other things that would be there are not yet well defined. For these
> other applications, the additional overhead is actually fairly
> meaningful; unlike the fraud proofs.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Jorge Timón via bitcoin-dev
Fair enough.
On Dec 9, 2015 4:03 PM, "Gregory Maxwell"  wrote:

> On Wed, Dec 9, 2015 at 7:54 AM, Jorge Timón  wrote:
> > From this question one could think that when you said "we can do the
> > cleanup hardfork later" earlier you didn't really mean it. And that
> > you will oppose that hardfork later just like you are opposing it now.
> > As said, I disagree that making a softfork first and then moving the
> > commitment is less disruptive (because people will need to adapt their
> > software twice), but if the intention is to never do the second part
> > then of course I agree it would be less disruptive.
> > How long after the softfork would you like to do the hardfork?
> > 1 year after the softfork? 2 years? never?
>
> I think it would be logical to do as part of a hardfork that moved
> commitments generally; e.g. a better position for merged mining (such
> a hardfork was suggested in 2010 as something that could be done if
> merged mining was used), room for commitments to additional block
> back-references for compact SPV proofs, and/or UTXO set commitments.
> Part of the reason to not do it now is that the requirements for the
> other things that would be there are not yet well defined. For these
> other applications, the additional overhead is actually fairly
> meaningful; unlike the fraud proofs.
>


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Gavin Andresen via bitcoin-dev
On Wed, Dec 9, 2015 at 3:03 AM, Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> I think it would be logical to do as part of a hardfork that moved
> commitments generally; e.g. a better position for merged mining (such
> a hardfork was suggested in 2010 as something that could be done if
> merged mining was used), room for commitments to additional block
> back-references for compact SPV proofs, and/or UTXO set commitments.
> Part of the reason to not do it now is that the requirements for the
> other things that would be there are not yet well defined. For these
> other applications, the additional overhead is actually fairly
> meaningful; unlike the fraud proofs.
>

So just design ahead for those future uses. Make the merkle tree:


                 root_in_block_header
                /                    \
     tx_data_root                  other_root
                                  /          \
                    segwitness_root        reserved_for_future_use_root

... where reserved_for_future_use is zero until some future block version
(or perhaps better, is just chosen arbitrarily by the miner and sent along
with the block data until some future block version).

That would minimize future disruption of any code that produced or consumed
merkle proofs of the transaction data or segwitness data, especially if the
reserved_for_future_use_root is allowed to be any arbitrary 256-bit value
and not a constant that would get hard-coded into segwitness-proof-checking
code.
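
In code, the commitment arithmetic that tree implies would look something
like the sketch below (my illustration only; Hash2 is declared as a
stand-in for Bitcoin's double-SHA256 over two concatenated 32-byte hashes):

    #include <array>
    using uint256 = std::array<unsigned char, 32>;

    // stand-in for double-SHA256 of the concatenation of two hashes,
    // as used between Merkle tree levels (declaration only)
    uint256 Hash2(const uint256& a, const uint256& b);

    uint256 BlockHeaderRoot(const uint256& tx_data_root,
                            const uint256& segwitness_root,
                            const uint256& reserved_root) {
        // the right branch commits to the new data...
        const uint256 other_root = Hash2(segwitness_root, reserved_root);
        // ...and pairs with the existing transaction merkle root, so
        // existing tx proofs just gain one extra step (other_root)
        return Hash2(tx_data_root, other_root);
    }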


-- 
--
Gavin Andresen


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Chris via bitcoin-dev
On 12/08/2015 10:12 AM, Gavin Andresen via bitcoin-dev wrote:
> Why segwitness as a soft fork? Stuffing the segwitness merkle tree in
> the coinbase is messy and will just complicate consensus-critical code
> (as opposed to making the right side of the merkle tree in
> block.version=5 blocks the segwitness data).
Agreed. I thought the rule was no contentious hard forks. It seems
hardly anyone opposes this change and there seems to be widespread
agreement that the hardfork version would be much cleaner.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-09 Thread Daniele Pinna via bitcoin-dev
If SegWit were implemented as a hardfork, could the entire blockchain be
reorganized starting from the Genesis block to free up historical space?


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Wladimir J. van der Laan via bitcoin-dev
On Mon, Dec 07, 2015 at 10:02:17PM +0000, Gregory Maxwell via bitcoin-dev wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating
> proposals were presented. I think this would be a good time to share my
> view of the near term arc for capacity increases in the Bitcoin system. I
> believe we’re in a fantastic place right now and that the community
> is ready to deliver on a clear forward path with a shared vision that
> addresses the needs of the system while upholding its values.

Thanks for writing this up. Putting the progress, ongoing work and plans related
to scaling in context, in one place, was badly needed.

> TL;DR:  I propose we work immediately towards the segwit 4MB block
> soft-fork which increases capacity and scalability, and recent speedups
> and incoming relay improvements make segwit a reasonable risk. BIP9
> and segwit will also make further improvements easier and faster to
> deploy. We’ll continue to set the stage for non-bandwidth-increase-based
> scaling, while building additional tools that would make bandwidth
> increases safer long term. Further work will prepare Bitcoin for further
> increases, which will become possible when justified, while also providing
> the groundwork to make them justifiable.

Sounds good to me.

There are multiple ways to get involved in ongoing work, where the community
can help to make this happen sooner:

- Review the versionbits BIP:
  https://github.com/bitcoin/bips/blob/master/bip-0009.mediawiki

  - Compare and test with the implementation:
    https://github.com/bitcoin/bitcoin/pull/6816

- Review the CSV BIPs:
  BIP68: https://github.com/bitcoin/bips/blob/master/bip-0068.mediawiki
  BIP112: https://github.com/bitcoin/bips/blob/master/bip-0112.mediawiki

  - Compare and test the implementations:
    https://github.com/bitcoin/bitcoin/pull/6564  BIP-112: Mempool-only CHECKSEQUENCEVERIFY
    https://github.com/bitcoin/bitcoin/pull/6312  BIP-68: Mempool-only sequence number constraint verification
    https://github.com/bitcoin/bitcoin/pull/7184  [WIP] Implement SequenceLocks functions for BIP 68

- The Segwit BIP is being written, but has not yet been published.

  - Gregory linked to an implementation but as he mentions it is not completely
    finished yet. ETA for a Segwit testnet is later this month, then you can
    test as well.

Wladimir


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jorge Timón via bitcoin-dev
On Dec 8, 2015 7:08 PM, "Wladimir J. van der Laan via bitcoin-dev" <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>   - Gregory linked to an implementation but as he mentions it is not
> completely finished yet. ETA for a Segwit testnet is later this month,
> then you can test as well.

Testnet4 ?


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gavin Andresen via bitcoin-dev
Thanks for laying out a road-map, Greg.

I'll need to think about it some more, but just a couple of initial
reactions:

Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the
coinbase is messy and will just complicate consensus-critical code (as
opposed to making the right side of the merkle tree in block.version=5
blocks the segwitness data).

It will also make any segwitness fraud proofs significantly larger (merkle
path versus merkle path to the coinbase transaction, plus the ENTIRE coinbase
transaction, which might be quite large, plus the merkle path up to the root).


We also need to fix the O(n^2) sighash problem as an additional BIP for ANY
blocksize increase. That also argues for a hard fork-- it is much easier to
fix it correctly and simplify the consensus code than to continue to apply
band-aid fixes on top of something fundamentally broken.


Segwitness will require a hard or soft-fork rollout, then a significant
fraction of the transaction-producing wallets to upgrade and start
supporting segwitness-style transactions.  I think it will be much quicker
than the P2SH rollout, because the biggest transaction producers have a
strong motivation to lower their fees, and it won't require a new type of
bitcoin address to fund wallets.  But it still feels like it'll be six
months to a year at the earliest before any relief from the current
problems we're seeing from blocks filling up.

Segwitness will make the current bottleneck (block propagation) a little
worse in the short term, because of the extra fraud-proof data.  Benefits
well worth the costs.

--

I think a barrier to quickly getting consensus might be a fundamental
difference of opinion on this:
   "Even without them I believe we’ll be in an acceptable position with
respect to capacity in the near term"

The heaviest users of the Bitcoin network (businesses who generate tens of
thousands of transactions per day on behalf of their customers) would
strongly disagree; the current state of affairs is NOT acceptable to them.



-- 
--
Gavin Andresen


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Justus Ranvier via bitcoin-dev
On 12/08/2015 09:12 AM, Gavin Andresen via bitcoin-dev wrote:
> Stuffing the segwitness merkle tree in the coinbase

If such a change is going to be deployed via a soft fork instead of a
hard fork, then the coinbase is the worst place to put the segwitness
merkle root.

Instead, put it in the first output of the generation transaction as an
OP_RETURN script.

This is a better pattern because coinbase space is limited while output
space is not. The next time there's a good reason to tie another merkle
tree to a block, that proposal can be designated for the second output
of the generation transaction.





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Justus Ranvier via bitcoin-dev
On 12/08/2015 11:41 AM, Mark Friedenbach wrote:
> A far better place than the generation transaction (which I assume means
> coinbase transaction?) is the last transaction in the block. That allows
> you to save, on average, half of the hashes in the Merkle tree.

I don't care what color that bikeshed is painted.

In whatever transaction it is placed, the hash should be on the output
side. That way is more future-proof, since it does not crowd out other
hashes which might be equally valuable to commit to someday.





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Tier Nolan via bitcoin-dev
On Tue, Dec 8, 2015 at 5:41 PM, Mark Friedenbach via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> A far better place than the generation transaction (which I assume means
> coinbase transaction?) is the last transaction in the block. That allows
> you to save, on average, half of the hashes in the Merkle tree.
>

This trick can be improved by only using certain tx counts.  If the number
of transactions is limited to a power of 2 (other than the extra
transactions), then you get a path of length zero.

The number of non-zero bits in the tx count determines how many digests are
required.

https://github.com/TierNolan/bips/blob/aux_header/bip-aux-header.mediawiki

This gets the benefit of a soft-fork, while also keeping the proof lengths
small.  The linked bip has a 105 byte overhead for the path.
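
A rough sketch of that digest-count arithmetic (mine, under my reading of
the proposal; it assumes Bitcoin's Merkle construction, where an odd node
at any level is hashed with a copy of itself):

    #include <cstdint>
    #include <cstdio>

    // Count the sibling digests a merkle branch for the LAST leaf of an
    // n-leaf Bitcoin-style tree must supply; self-pairings need none.
    int LastLeafBranchLen(uint64_t n) {
        int digests = 0;
        while (n > 1) {
            if (n % 2 == 0) ++digests;  // last node has a distinct sibling
            n = (n + 1) / 2;            // size of the parent level
        }
        return digests;
    }

    int main() {
        // 2^k ordinary txs plus an appended commitment: one digest only
        printf("%d %d %d\n",
               LastLeafBranchLen(5),      // 1
               LastLeafBranchLen(1025),   // 1
               LastLeafBranchLen(1000));  // 8
    }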

The cost is that only certain transaction counts are allowed.  In the worst
case, 12.5% of transactions would have to be left in the memory pool.  This
means around 7% of transactions would be delayed until the next block.

Blank transactions (or just transactions with low latency requirements)
could be used to increase the count so that it is raised to one of the
valid numbers.

Managing the UTXO set to ensure that there is at least one output that pays
to OP_TRUE is also a hassle.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 8:09 AM, Gregory Maxwell  wrote:

> On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim  wrote:
> 
> By contrast it does not reduce the safety factor for the UTXO set at
> all; which most hold as a much greater concern in general;

I don't agree that "most" hold UTXO as a much greater concern in general. I
think that it's a concern that has been addressed less, which means it is a
more unsolved concern. But it is not currently a bottleneck on block size.
Miners can afford way more RAM than 1 GB, and non-mining full nodes don't need
to store the UTXO set in memory. I think that at the moment, block propagation
time is the bottleneck, not UTXO size. It confuses me that SegWit is being
pushed as a short-term fix to the capacity issue when it does not address the
short-term bottleneck at all.

> and that
> isn't something you can say for a block size increase.

True.

I'd really like to see a grand unified cost metric that includes UTXO 
expansion. In the mean time, I think miners can use a bit more RAM.

> With respect to witness safety factor; it's only needed in the case of
> strategic or malicious behavior by miners-- both concerns which
> several people promoting large block size increases have not only
> disregarded but portrayed as unrealistic fear-mongering. Are you
> concerned about it?

Some. Much less than Peter Todd, for example, but when other people see
something as a concern that I don't, I try to pay attention to it. I expect
Peter wouldn't like the safety factor issue, and I'm surprised he didn't bring
it up.

Even if I didn't care about adversarial conditions, it would still interest me 
to pay attention to the safety factor for political reasons, as it would make 
subsequent blocksize increases much more difficult. Conspiracy theorists might 
have a field day with that one...

> In any case-- the other improvements described in
> my post give me reason to believe that risks created by that
> possibility will be addressable.

I'll take a look and try to see which of the worst-case concerns can and cannot 
be addressed by those improvements.




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jorge Timón via bitcoin-dev
On Wed, Dec 9, 2015 at 7:29 AM, Gregory Maxwell via bitcoin-dev wrote:
> What was being discussed was the location of the witness commitment;
> which is consensus critical regardless of where it is placed. Should
> it be placed in an available location which is compatible with the
> existing network, or should the block hashing data structure
> immediately be changed in an incompatible way to accommodate it in
> order to satisfy an aesthetic sense of purity and to make fraud proofs
> somewhat smaller?

From this question one could think that when you said "we can do the
cleanup hardfork later" earlier you didn't really mean it. And that
you will oppose that hardfork later just like you are opposing it now.
As said, I disagree that making a softfork first and then moving the
commitment is less disruptive (because people will need to adapt their
software twice), but if the intention is to never do the second part
then of course I agree it would be less disruptive.
How long after the softfork would you like to do the hardfork?
1 year after the softfork? 2 years? never?


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev
On Dec 8, 2015, at 6:02 AM, Gregory Maxwell via bitcoin-dev wrote:

> The particular proposal amounts to a 4MB blocksize increase at worst.

I understood that SegWit would allow about 1.75 MB of data in the average case 
while also allowing up to 4 MB of data in the worst case. This means that the 
mining and block distribution network would need a larger safety factor to deal 
with worst-case situations, right? If you want to make sure that nothing goes 
wrong when everything is at its worst, you need to size your network pipes to 
handle 4 MB in a timely (DoS-resistant) fashion, but you'd normally only be 
able to use 1.75 MB of it. It seems to me that it would be safer to use a 3 MB 
limit, and that way you'd also be able to use 3 MB of actual transactions.

As an accounting trick to bypass the 1 MB limit, SegWit sounds like it might 
make things less well accounted for.
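
For reference, the 4 MB worst case falls straight out of the limit
formula; a quick sketch of the arithmetic (mine, using the 1-in 1-out
p2pkh segwit sizes from the figures earlier in this digest):

    #include <cstdio>

    int main() {
        const double limit = 1e6;  // base + witness/4 <= 1MB
        // typical p2pkh tx: 87 base + 105 witness bytes
        const double typical = (87 + 105) / (87 + 105 / 4.0);  // ~1.70x
        // pathological tx: almost all witness, base -> 0, so 4x
        printf("typical %.2f MB, worst case %.0f MB\n",
               typical * limit / 1e6, 4 * limit / 1e6);
        return 0;
    }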





Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gregory Maxwell via bitcoin-dev
On Tue, Dec 8, 2015 at 11:48 PM, Jonathan Toomim  wrote:
> I understood that SegWit would allow about 1.75 MB of data in the average
> case while also allowing up to 4 MB of data in the worst case. This means
> that the mining and block distribution network would need a larger safety
> factor to deal with worst-case situations, right? If you want to make sure

By contrast it does not reduce the safety factor for the UTXO set at
all, which most hold as a much greater concern in general; and that
isn't something you can say for a block size increase.

With respect to witness safety factor; it's only needed in the case of
strategic or malicious behavior by miners-- both concerns which
several people promoting large block size increases have not only
disregarded but portrayed as unrealistic fear-mongering. Are you
concerned about it?  In any case-- the other improvements described in
my post give me reason to believe that risks created by that
possibility will be addressable.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jorge Timón via bitcoin-dev
On Wed, Dec 9, 2015 at 12:59 AM, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Dec 8, 2015 at 3:12 PM, Gavin Andresen via bitcoin-dev wrote:
> We already have consensus critical enforcement there, the height,
> which has almost never been problematic. (A popular block explorer
> recently misimplemented the var-int decode and suffered an outage).

It would also be a nice opportunity to move the height to a more
accessible place.
For example CBlockHeader::hashMerkleRoot (and CBlockIndex's) could be
replaced with a hash of the following struct:

struct hashRootStruct
{
uint256 hashMerkleRoot;
uint256 hashWitnessesRoot;
int32_t nHeight;
}

> From a risk reduction perspective, I think it is much preferable to
> perform the primary change in a backwards compatible manner, and pick
> up the data reorganization in a hardfork if anyone even cares.


But then all wallet developers will need to adapt their software twice.
Why introduce technical debt for no good reason?

> I think that's generally a nice cadence to split up risks that way, and
> avoid controversy.

Uncontroversial hardforks can also be deployed with small risks as
described in BIP99.


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 7:50 AM, Jorge Timón  wrote:

> I don't understand. SPV nodes won't think they are validating transactions 
> with the new version unless they adapt to the new format. They will be simply 
> unable to receive payments using the new format if it is a softfork (although 
> as said I agree with making it a hardfork on the simpler design and smaller 
> fraud proofs grounds alone).
> 
Okay, I might just not understand how a segwit payment would look to current
software yet. I'll add learning about that to my to-do list...




Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jorge Timón via bitcoin-dev
On Wed, Dec 9, 2015 at 1:58 AM, Jorge Timón  wrote:
> struct hashRootStruct
> {
> uint256 hashMerkleRoot;
> uint256 hashWitnessesRoot;
> int32_t nHeight;
> }

Or better, for forward compatibility (we may want to include more
things apart from nHeight and hashWitnessesRoot in the future):

struct hashRootStruct
{
 uint256 hashMerkleRoot;
 uint256 hashWitnessesRoot;
 uint256 hashextendedHeader;
}

For example, we may want to choose to add an extra nonce there.
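
As a minimal sketch of how that could work (mine, with Hash256 declared as
a stand-in for Bitcoin's double-SHA256), the header field would simply
commit to the hash of the serialized struct:

    #include <array>
    #include <cstring>

    using uint256 = std::array<unsigned char, 32>;

    // stand-in for Bitcoin's double-SHA256 (declaration only)
    uint256 Hash256(const unsigned char* data, std::size_t len);

    // the header's hashMerkleRoot would be replaced by the hash of the
    // 96-byte serialized hashRootStruct, committing to all three roots
    uint256 ComputeRootCommitment(const uint256& merkleRoot,
                                  const uint256& witnessesRoot,
                                  const uint256& extendedHeaderRoot) {
        unsigned char buf[96];
        std::memcpy(buf,      merkleRoot.data(),         32);
        std::memcpy(buf + 32, witnessesRoot.data(),      32);
        std::memcpy(buf + 64, extendedHeaderRoot.data(), 32);
        return Hash256(buf, sizeof(buf));
    }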


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev
Agree. This data does not belong in the coinbase. That space is for miners to 
use, not devs.

I also think that a hard fork is better for SegWit, as it reduces the size of 
fraud proofs considerably, makes the whole design more elegant and less 
kludgey, and is safer for clients who do not upgrade in a timely fashion. I 
don't like the idea that SegWit would invalidate the security assumptions of 
non-upgraded clients (including SPV wallets). I think that for these clients, 
no data is better than invalid data. Better to force them to upgrade by cutting 
them off the network than to let them think they're validating transactions 
when they're not.


On Dec 8, 2015, at 11:55 PM, Justus Ranvier via bitcoin-dev 
 wrote:

> If such a change is going to be deployed via a soft fork instead of a
> hard fork, then the coinbase is the worst place to put the segwitness
> merkle root.
> 
> Instead, put it in the first output of the generation transaction as an
> OP_RETURN script.
> 
> This is a better pattern because coinbase space is limited while output
> space is not. The next time there's a good reason to tie another merkle
> tree to a block, that proposal can be designated for the second output
> of the generation transaction.
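
(For concreteness, the commitment output described above would be tiny --
a zero-value first output whose scriptPubKey is OP_RETURN followed by the
32-byte segwitness merkle root. A sketch using raw opcode bytes, since
nothing here is tied to a particular API and the helper name is
hypothetical:)

    #include <cstdint>
    #include <vector>

    // Sketch of the suggested commitment output script:
    //   OP_RETURN <32-byte segwitness merkle root>
    // Raw byte values: 0x6a = OP_RETURN, 0x20 = push next 32 bytes.
    std::vector<uint8_t> MakeWitnessCommitmentScript(const uint8_t root[32])
    {
        std::vector<uint8_t> script;
        script.push_back(0x6a);
        script.push_back(0x20);
        script.insert(script.end(), root, root + 32);
        return script;  // 34-byte scriptPubKey, value 0
    }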



___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Jonathan Toomim via bitcoin-dev

On Dec 9, 2015, at 7:48 AM, Luke Dashjr  wrote:

> How about we pursue the SegWit softfork, and at the same time* work on a
> hardfork which will simplify the proofs and reduce the kludgeyness of merge-
> mining in general? Then, if the hardfork is ready before the softfork, they
> can both go together, but if not, we aren't stuck delaying the improvements of
> SegWit until the hardfork is completed.

So that all our code that parses the blockchain needs to be able to find the 
segwit data in both places? That doesn't really sound like an improvement to 
me. Why not just do it as a hard fork? They're really not that hard to do.


___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gregory Maxwell via bitcoin-dev
On Tue, Dec 8, 2015 at 3:12 PM, Gavin Andresen via bitcoin-dev
 wrote:
> Why segwitness as a soft fork? Stuffing the segwitness merkle tree in the
> coinbase is messy and will just complicate consensus-critical code (as
> opposed to making the right side of the merkle tree in block.version=5
> blocks the segwitness data).

It's nearly complexity-costless to put it in the coinbase transaction.
Exploring the costs is one of the reasons why this was implemented
first.

We already have consensus critical enforcement there, the height,
which has almost never been problematic. (A popular block explorer
recently misimplemented the var-int decode and suffered an outage).
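
(For context, the existing height commitment is BIP34's: the coinbase
scriptSig must begin with a push of the block height, serialized as a
little-endian script number. A self-contained sketch of decoding it --
presumably the sort of decode the outage involved -- might look like
this; the function name is illustrative:)

    #include <cstdint>
    #include <stdexcept>
    #include <vector>

    // Decode the BIP34 height from the front of a coinbase scriptSig.
    // The first byte is a direct push opcode (0x01-0x4b) giving the
    // length; the height follows as little-endian bytes. (This ignores
    // CScriptNum's sign bit, which is fine for positive heights.)
    int64_t DecodeCoinbaseHeight(const std::vector<uint8_t>& scriptSig)
    {
        if (scriptSig.empty())
            throw std::runtime_error("empty scriptSig");
        const size_t len = scriptSig[0];
        if (len < 1 || len > 8 || scriptSig.size() < 1 + len)
            throw std::runtime_error("malformed height push");
        int64_t height = 0;
        for (size_t i = 0; i < len; ++i)   // little-endian reassembly
            height |= int64_t(scriptSig[1 + i]) << (8 * i);
        return height;
    }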

And most but not all prior commitment proposals have suggested the
same or similar.  The exact location is not that critical, however,
and we do have several soft-fork compatible options.

> It will also make any segwitness fraud proofs significantly larger (merkle
> path versus merkle path to coinbase transaction, plus ENTIRE coinbase
> transaction, which might be quite large, plus merkle path up to root).

Yes, it will make them larger by 32 bytes times log2() of the number of
transactions in a block -- say 448 bytes (14 levels) for a block of
~16,000 transactions.

With the coinbase transaction that's another couple of kilobytes; I
think this is negligible.

From a risk reduction perspective, I think it is much preferable to
perform the primary change in a backwards compatible manner, and pick
up the data reorganization in a hardfork if anyone even cares.

I think that's generally a nice cadence to split up risks that way; and
avoid controversy.

> We also need to fix the O(n^2) sighash problem as an additional BIP for ANY
> blocksize increase.

The witness data is never an input to sighash, so no, I don't agree
that this holds for "any" increase.

> Segwitness will make the current bottleneck (block propagation) a little
> worse in the short term, because of the extra fraud-proof data.  Benefits
> well worth the costs.

The fraud proof data is deterministic, full nodes could skip sending
it between each other, if anyone cared; but the overhead is pretty
tiny in any case.

> I think a barrier to quickly getting consensus might be a fundamental
> difference of opinion on this:
>"Even without them I believe we’ll be in an acceptable position with
> respect to capacity in the near term"
>
> The heaviest users of the Bitcoin network (businesses who generate tens of
> thousands of transactions per day on behalf of their customers) would
> strongly disagree; the current state of affairs is NOT acceptable to them.

My message lays out a plan for several different complementary
capacity advances; it's not referring to the current situation--
though the current capacity situation is no emergency.

I believe it already reflects the emerging consensus in the Bitcoin
Core project; in terms of the overall approach and philosophy, if not
every specific technical detail. It's not a forever plan, but a
pragmatic one that understands that the future is uncertain no matter
what we do; one that trusts that we'll respond to whatever
contingencies surprise us on the road to success.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Luke Dashjr via bitcoin-dev
On Tuesday, December 08, 2015 11:40:42 PM Jonathan Toomim via bitcoin-dev 
wrote:
> Agree. This data does not belong in the coinbase. That space is for miners
> to use, not devs.

This has never been guaranteed, nor are softforks a "dev action" in the first 
place.

> I also think that a hard fork is better for SegWit, as it reduces the size
> of fraud proofs considerably, makes the whole design more elegant and less
> kludgey, and is safer for clients who do not upgrade in a timely fashion.

How about we pursue the SegWit softfork, and at the same time* work on a 
hardfork which will simplify the proofs and reduce the kludgeyness of merge-
mining in general? Then, if the hardfork is ready before the softfork, they 
can both go together, but if not, we aren't stuck delaying the improvements of 
SegWit until the hardfork is completed.

* I have in fact been working on such a proposal for a while now, since before 
SegWit.

> I don't like the idea that SegWit would invalidate the security
> assumptions of non-upgraded clients (including SPV wallets). I think that
> for these clients, no data is better than invalid data. Better to force
> them to upgrade by cutting them off the network than to let them think
> they're validating transactions when they're not.

There isn't an option for "no data", as non-upgraded nodes in a hardfork are 
left completely vulnerable to attacking miners, even ones with much lower 
hashrate than the 51% attack risk. So the alternatives are:
- hardfork: complete loss of all security for the old nodes
- softfork: degraded security for old nodes

Luke
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gregory Maxwell via bitcoin-dev
On Wed, Dec 9, 2015 at 1:09 AM, Gavin Andresen  wrote:
> Create a 1-megabyte transaction, with all of its inputs spending
> segwitness-spending SIGHASH_ALL inputs.
>
> Because the segwitness inputs are smaller in the block, you can fit more of
> them into 1 megabyte. Each will hash very close to one megabyte of data.

Witness size comes out of the 1MB at a factor of 0.25. It is not
possible to make a block which has signatures with the full 1MB of
data under the sighash while also having signatures externally. So
every byte moved into the witness, and thus counted at only 25%, comes
out of the data being hashed, and is hashed nInputs (*checksigs) fewer
times.
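
(A back-of-envelope way to see the effect, using the rough per-input
sizes that come up later in this thread -- ~41 base bytes and ~105
witness bytes for a p2pkh-style input; the numbers are illustrative
only:)

    #include <cstdio>

    // With base + witness/4 <= 1MB, filling a transaction with
    // p2pkh-style inputs bounds both the number of inputs and the base
    // data each CHECKSIG re-hashes.
    int main()
    {
        const double limit = 1e6;    // base + witness/4 budget, bytes
        const double base_in = 41;   // base bytes per input
        const double wit_in = 105;   // witness bytes per input

        double n = limit / (base_in + wit_in / 4);  // max inputs
        double hashed_each = n * base_in;           // data under each sighash
        printf("inputs: %.0f, hashed per sig: %.0f kB, total: %.1f GB\n",
               n, hashed_each / 1e3, n * hashed_each / 1e9);
        // -> ~14870 inputs, ~610 kB each, ~9.1 GB total: far less than
        //    n * 1MB, because witness bytes are never under the sighash.
        return 0;
    }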

> I think it is a huge mistake not to "design for success" (see
> http://gavinandresen.ninja/designing-for-success ).

We are designing for success; including the success of being able to
adapt and cope with uncertainty-- which is the most critical kind of
success we can have in a world where nothing is, or can be,
predictable.

> I think it is a huge mistake to pile on technical debt in consensus-critical
> code. I think we should be working harder to make things simpler, not more
> complex, whenever possible.

I agree, but nothing I have advocated creates significant technical
debt. It is also a bad engineering practice to combine functional
changes (especially ones with poorly understood system wide
consequences and low user autonomy) with structural tidying.

> And I think there are pretty big self-inflicted current problems because
> worries about theoretical future problems have prevented us from coming to
> consensus on simple solutions.

That isn't my perspective. I believe we've suffered delays because of
a strong desire to be inclusive and hear out all ideas, and not
forestall market adoption, even for ideas that eschewed pragmatism and
tried to build for forever in a single step and which in our heart of
hearts we knew were not the right path today. It's time to move past
that and get back on track with the progress we can make and have been
making, in terms of capacity as well as many other areas. I think that
is designing for success.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Ryan Butler via bitcoin-dev
I see, thanks for clearing that up, I misread what Gavin stated.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Mark Friedenbach via bitcoin-dev
Greg, if you have actual data showing that putting the commitment in the
last transaction would be disruptive, and how disruptive, that would be
appreciated. Of the mining hardware I have looked at, none of it cares at
all about any transaction other than the coinbase. You need to provide a
path to the coinbase for extranonce rolling, but the witness commitment
wouldn't need to be updated.

I'm sorry but it's not clear how this would be an incompatible upgrade,
disruptive to anything other than the transaction selection code. Maybe I'm
missing something? I'm not familiar with all the hardware or pooling setups
out there.

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Gregory Maxwell via bitcoin-dev
On Wed, Dec 9, 2015 at 4:44 AM, Ryan Butler  wrote:
>>I agree, but nothing I have advocated creates significant technical
>>debt. It is also a bad engineering practice to combine functional
>>changes (especially ones with poorly understood system wide
>>consequences and low user autonomy) with structural tidying.
>
> I don't think I would classify placing things in consensus critical code
> when it doesn't need to be as "structural tidying".  Gavin said "pile on"
> which you took as implying "a lot", he can correct me, but I believe he
> meant "add to".

Nothing being discussed would move something from consensus critical
code to not consensus critical.

What was being discussed was the location of the witness commitment;
which is consensus critical regardless of where it is placed. Should
it be placed in an available location which is compatible with the
existing network, or should the block hashing data structure
immediately be changed in an incompatible way to accommodate it in
order to satisfy an aesthetic sense of purity and to make fraud proofs
somewhat smaller?

I argue that the size difference in the fraud proofs is not
interesting, the disruption to the network in an incompatible upgrade
is interesting; and that if it really were desirable reorganization to
move the commitment point could be done as part of a separate change
that changes only the location of things (and/or other trivial
adjustments); and that proceeding in this fashion would minimize
disruption and risk... by making the incompatible changes that will
force network wide software updates be as small and as simple as
possible.

>> (especially ones with poorly understood system wide consequences and low
>> user autonomy)
>
> This implies that you have no confidence in the unit tests and functional
> testing around Bitcoin, and that should not be a reason to avoid refactoring.
> It's more a reason to increase testing so that you will have confidence when
> you refactor.

I am speaking from our engineering experience in a public,
world-wide, multi-vendor, multi-version, inter-operable, distributed
system which is constantly changing and in production contains private
code, unknown and assorted hardware, mixtures of versions, unreliable
networks, undisclosed usage patterns, and more sources of complex
behavior than can be counted-- including complex economic incentives
and malicious participants.

Even if we knew the complete spectrum of possible states for the
system, the combinatoric explosion makes complete testing infeasible.

Though testing is essential, one cannot "unit test" away all the risks
related to deploying a new behavior in the network.
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-08 Thread Anthony Towns via bitcoin-dev
On Wed, Dec 09, 2015 at 01:31:51AM +, Gregory Maxwell via bitcoin-dev wrote:
> On Wed, Dec 9, 2015 at 1:09 AM, Gavin Andresen  
> wrote:
> > Create a 1-megabyte transaction, with all of its inputs spending
> > segwitness-spending SIGHASH_ALL inputs.
> > Because the segwitness inputs are smaller in the block, you can fit more of
> > them into 1 megabyte. Each will hash very close to one megabyte of data.
> Witness size comes out of the 1MB at a factor of 0.25. It is not
> possible to make a block which has signatures with the full 1MB of
> data under the sighash while also having signatures externally. So
> every byte moved into the witness, and thus counted at only 25%, comes
> out of the data being hashed, and is hashed nInputs (*checksigs) fewer
> times.

So the worst case script I can come up with is:

   1 0 {2OVER CHECKSIG ADD CODESEP} OP_EQUAL

which (if I didn't mess it up) would give you a redeem script of about
36B plus 4B per sigop, redeemable via a single signature that's valid
for precisely one of the checksigs.

Maxing out 20k sigops gives 80kB of redeemscript in that case; so you
could have to hash 19.9GB of data to fully verify the script with
current bitcoin rules.

Segwit with the 75% factor and the same sigop limit would make that very
slightly worse -- it'd up the hashed data by maybe 1MB in total. Without
a sigop limit at all it'd be severely worse of course -- you could fit
almost 500k sigops in 2MB of witness data, leaving 500kB of base data,
for a total of 250GB of data to hash to verify your 3MB block...

Segwit without the 75% factor, but with a 3MB witness data limit,
makes that up to three times worse (750k sigops in 3MB of witness data,
with 1MB of base data for 750GB of data to hash), but with any reasonable
sigop limit, afaics it's pretty much the same.

However I think you could add some fairly straightforward (maybe
soft-forking) optimisations to just rule out that sort of (deliberate)
abuse; eg disallowing more than a dozen sigops per input, or just failing
checksigs with the same key in a single input, maybe. So maybe that's
not sufficiently realistic?

I think the only realistic transactions that would cause lots of sigs and
hashing are ones that have lots of inputs that each require a signature
or two, so might happen if a miner is cleaning up dust. In that case,
your 1MB transaction is a single output with a bunch of 41B inputs. If you
have 10k such inputs, that's only 410kB. If each input is a legitimate
2 of 2 multisig, that's about 210 bytes of witness data per input, or
2.1MB, leaving 475kB of base data free, which matches up. 20k sigops by
475kB of data is 9.5GB of hashing.

Switching from 2-of-2 multisig to just a single public key would prevent
you from hitting the sigop limit; I think you could hit 14900 signatures
with about 626kB of base data and 1488kB of witness data, for about
9.3GB of hashed data.

That's a factor of 2x improvement over the deliberately malicious exploit
case above, but it's /only/ a factor of 2x.
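
(Multiplying those two scenarios out, as a trivial check of the
arithmetic; all the figures are from the paragraphs above:)

    #include <cstdio>

    // Each signature hashes roughly all of the base data, so total
    // hashing is approximately sigops * base bytes under the sighash.
    int main()
    {
        printf("2-of-2 multisig: %.1f GB\n", 20000 * 475e3 / 1e9);  // 9.5 GB
        printf("single pubkey:   %.1f GB\n", 14900 * 626e3 / 1e9);  // 9.3 GB
        return 0;
    }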

I think Rusty's calculation http://rusty.ozlabs.org/?p=522 was that
the worst case for now is hashing about 406kB, 3300 times for 1.34GB of
hashed data [0].

So that's still almost a factor of 4 or 5 worse than what's possible now?
Unless I messed up the maths somewhere?

Cheers,
aj

[0] Though I'm not sure that's correct? Seems like with a 1MB
transaction with i inputs, each with s bytes of scriptsig, that you're
hashing (1MB-s*i), and the scriptsig for a p2pkh should only be about
105B, not 180B.  So maximising i*(1MB-s*i) = 1e6*i - 105*i^2 gives i =
1e6/210, so 4762 inputs, and hashing 500kB of data each time,
for about 2.4GB of hashed data total.
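
(The closed form from that footnote in a few lines, if you want to check
it; purely illustrative:)

    #include <cstdio>

    // Footnote [0] as arithmetic: i inputs each hash (1MB - s*i) bytes,
    // so total = i*(1e6 - s*i), maximised where the derivative
    // 1e6 - 2*s*i equals zero.
    int main()
    {
        const double s = 105;            // p2pkh scriptsig bytes
        const double i = 1e6 / (2 * s);  // optimum: ~4762 inputs
        const double per = 1e6 - s * i;  // ~500 kB hashed per input
        printf("i = %.0f, per-input = %.0f kB, total = %.2f GB\n",
               i, per / 1e3, i * per / 1e9);   // ~2.38 GB
        return 0;
    }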

___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Anthony Towns via bitcoin-dev
On Tue, Dec 08, 2015 at 05:21:18AM +, Gregory Maxwell via bitcoin-dev wrote:
> On Tue, Dec 8, 2015 at 4:58 AM, Anthony Towns via bitcoin-dev
>  wrote:
> > Having a cost function rather than separate limits does make it easier to
> > build blocks (approximately) optimally, though (ie, just divide the fee by
> > (base_bytes+witness_bytes/4) and sort). Are there any other benefits?
> Actually being able to compute fees for your transaction: If there are
> multiple limits that are "at play" then how you need to pay would
> depend on the entire set of other candidate transactions, which is
> unknown to you.

Isn't that solvable in the short term, if miners just agree to order
transactions via a cost function, without enforcing it at consensus
level until a later hard fork that can also change the existing limits
to enforce that balance?

That is, moving from (1MB base + 3MB witness + 20k sigops) with segwit
initially, to something like (B + W + 200*U + 40*S < 5e6), where B is
base bytes, W is witness bytes, U is the number of UTXOs added (or
removed) and S is the number of sigops, or whatever factors actually
make sense.

I guess segwit does allow soft-forking more sigops immediately -- segwit
transactions only add sigops into the segregated witness, which doesn't
get counted for existing consensus. So it would be possible to take the
opposite approach, and make the rule immediately be something like:

  50*S < 1M
  B + W/4 + 25*S' < 1M

(where S is sigops in base data, and S' is sigops in witness) and
just rely on S trending to zero (or soft-fork in a requirement that
non-segregated witness transactions have fewer than B/50 sigops) so that
there's only one (linear) equation to optimise, when deciding fees or
creating a block. (I don't see how you could safely set the coefficient
for S' too much smaller though)

B+W/4+25*S' for a 2-in/2-out p2pkh would still be 178+206/4+25*2=280
though, which would allow 3570 transactions per block, versus 2700 now,
which would only be a 32% increase...
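
(A sketch of what "one linear equation to optimise" means for block
construction: score each candidate by fee per unit of the combined cost
B + W/4 + 25*S' from above and sort greedily. The types and field names
are illustrative, not Bitcoin Core's:)

    #include <algorithm>
    #include <cstdint>
    #include <vector>

    // Candidate transaction under the single-cost-function rule.
    struct Candidate {
        int64_t fee;     // satoshis
        double base;     // B: base bytes
        double witness;  // W: witness bytes
        int sigops;      // S': sigops in the witness
        double Cost() const { return base + witness / 4 + 25 * sigops; }
    };

    // Greedy block building: highest fee per unit of cost first.
    // Cross-multiplying avoids dividing by the cost.
    void SortByFeeRate(std::vector<Candidate>& txs)
    {
        std::sort(txs.begin(), txs.end(),
                  [](const Candidate& a, const Candidate& b) {
                      return a.fee * b.Cost() > b.fee * a.Cost();
                  });
    }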

> These don't, however, apply all that strongly if only one limit is
> likely to be the limiting limit... though I am unsure about counting
> on that; after all if the other limits wouldn't be limiting, why have
> them?

Sure, but, at least for now, there are already two limits that are being
hit. Having one is *much* better than two, but I don't think two is a
lot better than three?

(Also, the ratio between the parameters doesn't necessarily seem like a
constant; it's not clear to me that hardcoding a formula with a single
limit is actually better than hardcoding separate limits, and letting
miners/the market work out coefficients that match the sort of contracts
that are actually being used)

> > That seems kinda backwards.
> It can seem that way, but all limiting schemes have pathological cases
> where someone runs up against the limit in the most costly way. Keep
> in mind that casual pathological behavior can be suppressed via
> IsStandard like rules without baking them into consensus; so long as
> the candidate attacker isn't miners themselves. Doing so where
> possible can help avoid cases like the current sigops limiting which
> is just ... pretty broken.

Sure; it just seems to be halving the increase in block space (60% versus
100% extra for p2pkh, 100% versus 200% for 2/2 multisig p2sh) for what
doesn't actually look like that much of a benefit in fee comparisons?

I mean, as far as I'm concerned, segwit is great even if it doesn't buy
any improvement in transactions/block, so even a 1% gain is brilliant.
I'd just rather the 100%-200% gain I was expecting. :)

Cheers,
aj
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] Capacity increases for the Bitcoin system.

2015-12-07 Thread Bryan Bishop via bitcoin-dev
On Mon, Dec 7, 2015 at 4:02 PM, Gregory Maxwell wrote:
> The Scaling Bitcoin Workshop in HK is just wrapping up. Many fascinating
> proposals were presented. I think this would be a good time to share my
> view of the near term arc for capacity increases in the Bitcoin system. I
> believe we’re in a fantastic place right now and that the community
> is ready to deliver on a clear forward path with a shared vision that
> addresses the needs of the system while upholding its values.

ACK.

One of the interesting take-aways from the workshops for me has been
that there is a large discrepancy between what developers are doing
and what's more widely known. When I was doing initial research and
work for my keynote at the Montreal conference (
http://diyhpl.us/~bryan/irc/bitcoin/scalingbitcoin-review.pdf -- an
attempt at being exhaustive, prior to seeing the workshop proposals ),
what I was most surprised by was the discrepancy between what we think
is being talked about versus what has been emphasized or socially
processed (lots of proposals appear in text, but review efforts are
sometimes "hidden" in corners of github pull request comments, for
example). As another example, the libsecp256k1 testing work reached a
level unseen except perhaps in the aerospace industry, but these sorts
of details are not apparent if you are reading bitcoin-dev archives.
It is very hard to listen to all ideas and find great ideas.
Sometimes, our time can be almost completely exhausted by evaluating
inefficient proposals, so it's not surprising that rough consensus
building could take time. I suspect we will see consensus moving in
positive directions around the proposals you have highlighted.

When Satoshi originally released the Bitcoin whitepaper, practically
everyone-- somehow with the exception of Hal Finney-- didn't look,
because the cost of evaluating cryptographic system proposals is so
high and everyone was jaded and burned out for the past umpteen
decades. (I have IRC logs from January 10th 2009 where I immediately
dismissed Bitcoin after I had seen its announcement on the
p2pfoundation mailing list; perhaps in retrospect I should not let
family tragedy so greatly impact my evaluation of proposals...). It's
hard to evaluate these proposals. Sometimes it may feel like random
proposals are review-resistant, or designed to burn our time up. But I
think this is more reflective of the simple fact that consensus takes
effort, and it's hard work, and this is to be expected in this sort of
system design.

Your email contains a good summary of recent scaling progress and of
efforts presented at the Hong Kong workshop. I like summaries. I have
previously recommended making more summaries and posting them to the
mailing list. In general, it would be good if developers were to write
summaries of recent work and efforts and post them to the bitcoin-dev
mailing list. BIP drafts are excellent. Long-term proposals are
excellent. Short-term coordination happens over IRC, and that makes
sense to me. But I would point out that many of the developments even
from, say, the Montreal workshop were notably absent from the mailing
list. Unless someone was paying close attention, they wouldn't have
noticed some of those efforts which, in some cases, haven't been
mentioned since. I suspect most of this is a matter of attention,
review and keeping track of loose ends, which can be admittedly
difficult.

Short (or even long) summaries in emails are helpful because they
increase the ability of the community to coordinate and figure out
what's going on. Often I will write an email that summarizes some
content simply because I estimate that I am going to forget the
details in the near future, and if I am going to forget them then it
seems likely that others might. This creates a broad base of
proposals and content to build from when we're doing development work
in the future, making for a much richer community as a consequence.
The contributions from the scalingbitcoin.org workshops are a welcome
addition, and the proposal outlined in the above email contains a good
summary of recent progress. We need more of this sort of synthesis,
we're richer for it. I am excitedly looking forward to the impending
onslaught of Bitcoin progress.

- Bryan
http://heybryan.org/
1 512 203 0507
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev