Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Olaoluwa Osuntokun via bitcoin-dev
> Correct me if I'm wrong, but from my interpretation we can't use that
> method as described as we need to output 64-bit integers rather than
> 32-bit integers.

Had a chat with gmax off-list and came to the realization that the method
_should_ indeed generalize to our case of outputting 64-bit integers.
We'll need to do a bit of bit twiddling to make it work properly. I'll
modify our implementation and report back with some basic benchmarks.
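For the archives, here is a sketch of how Lemire's multiply-and-shift reduction generalizes to 64-bit hash outputs (Python is used purely for illustration; a real implementation would take the high half of a 128-bit product of two 64-bit operands):

```python
def map_into_range(x, n):
    """Map a uniform 64-bit value x into [0, n) by taking the high
    64 bits of the 128-bit product x * n. This has the same slight
    non-uniformity as x % n, but needs only a multiply and a shift
    instead of a division."""
    assert 0 <= x < 1 << 64
    return (x * n) >> 64
```

In C this is roughly `(uint64_t)(((__uint128_t)x * n) >> 64)`; the bit twiddling mentioned above amounts to emulating that 128-bit multiply on platforms without one.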

-- Laolu


On Thu, Jun 8, 2017 at 8:42 PM Olaoluwa Osuntokun  wrote:

> Gregory wrote:
> > I see the inner loop of construction and lookup are free of
> > non-constant divmod. This will result in implementations being
> > needlessly slow
>
> Ahh, sipa brought this up the other day, but I thought he was referring to the
> coding loop (which uses a power of 2 divisor/modulus), not the
> siphash-then-reduce loop.
>
> > I believe this can be fixed by using this approach
> >
> http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
> > which has the same non-uniformity as mod but needs only a multiply and
> > shift.
>
> Very cool, I wasn't aware of the existence of such a mapping.
>
> Correct me if I'm wrong, but from my interpretation we can't use that
> method as described, as we need to output 64-bit integers rather than
> 32-bit integers. A range of 32 bits would constrain the number of items
> we could encode to ~4096 to ensure that we don't overflow with fp
> values such as 20 (which we currently use in our code).
>
> If filter commitments are to be considered for a soft-fork in the future,
> then we should definitely optimize the construction of the filters as much
> as possible! I'll look into that paper you referenced to get a feel for
> just how complex the optimization would be.
>
> > Shouldn't all cases in your spec where you have N=transactions be
> > n=indexed-outputs? Otherwise, I think your golomb parameter and false
> > positive rate are wrong.
>
> Yep! Nice catch. Our code is correct, but the mistake in the spec was an
> oversight on my part. I've pushed a commit[1] to the bip repo referenced
> in the OP to fix this error.
>
> I've also pushed another commit to explicitly take advantage of the fact
> that P is a power-of-two within the coding loop [2].
>
> -- Laolu
>
> [1]:
> https://github.com/Roasbeef/bips/commit/bc5c6d6797f3df1c4a44213963ba12e72122163d
> [2]:
> https://github.com/Roasbeef/bips/commit/578a4e3aa8ec04524c83bfc5d14be1b2660e7f7a
>
>
> On Wed, Jun 7, 2017 at 2:41 PM Gregory Maxwell  wrote:
>
>> On Thu, Jun 1, 2017 at 7:01 PM, Olaoluwa Osuntokun via bitcoin-dev
>>  wrote:
>> > Hi y'all,
>> >
>> > Alex Akselrod and I would like to propose a new light client BIP for
>> > consideration:
>> >*
>> https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>>
>> I see the inner loop of construction and lookup are free of
>> non-constant divmod. This will result in implementations being
>> needlessly slow (especially on arm, but even on modern x86_64 a
>> division is a 90 cycle-ish affair.)
>>
>> I believe this can be fixed by using this approach
>>
>> http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
>>which has the same non-uniformity as mod but needs only a multiply
>> and shift.
>>
>> Otherwise fast implementations will have to implement the code to
>> compute bit twiddling hack exact division code, which is kind of
>> complicated. (e.g. via the technique in "{N}-bit Unsigned Division via
>> {N}-bit Multiply-Add" by Arch D. Robison).
>>
>> Shouldn't all cases in your spec where you have N=transactions be
>> n=indexed-outputs? Otherwise, I think your golomb parameter and false
>> positive rate are wrong.
>>
>
___
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev


Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Olaoluwa Osuntokun via bitcoin-dev
Hi y'all,

Thanks for all the comments so far!

I've pushed a series of updates to the text of the BIP repo linked in the
OP. The fixes include typos, components of the specification which were
incorrect (N is the total number of items, NOT the number of txns in the
block), and clarifications to a few sections.

The latest version also includes a set of test vectors (as CSV files),
which for a series of fp rates (1/2 to 1/2^32) include the following (for
6 testnet blocks, one of which generates a "null" filter):

   * The block height
   * The block hash
   * The raw block itself
   * The previous basic+extended filter header
   * The basic+extended filter header for the block
   * The basic+extended filter for the block

The size of the test vectors was too large to include in-line within the
document, so we put them temporarily in a distinct folder [1]. The code
used to generate the test vectors has also been included.

-- Laolu

[1]: https://github.com/Roasbeef/bips/tree/master/gcs_light_client


On Thu, Jun 1, 2017 at 9:49 PM Olaoluwa Osuntokun  wrote:

> > In order to consider the average+median filter sizes in a world with
> > larger blocks, I also ran the index for testnet:
> >
> > * total size:  2753238530
> > * total avg:  5918.95736054141
> > * total median:  60202
> > * total max:  74983
> > * regular size:  1165148878
> > * regular avg:  2504.856172982827
> > * regular median:  24812
> > * regular max:  64554
> > * extended size:  1588089652
> > * extended avg:  3414.1011875585823
> > * extended median:  35260
> > * extended max:  41731
> >
>
> Oops, realized I made a mistake. These are the stats for Feb 2016 until
> about a month ago (since height 400k iirc).
>
> -- Laolu
>
>
> On Thu, Jun 1, 2017 at 12:01 PM Olaoluwa Osuntokun 
> wrote:
>
>> Hi y'all,
>>
>> Alex Akselrod and I would like to propose a new light client BIP for
>> consideration:
>>*
>> https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>>
>> This BIP proposal describes a concrete specification (along with
>> reference implementations[1][2][3]) for the much discussed client-side
>> filtering reversal of BIP-37. The precise details are described in the
>> BIP, but as a summary: we've implemented a new light-client mode that uses
>> client-side filtering based off of Golomb-Rice coded sets. Full-nodes
>> maintain an additional index of the chain, and serve this compact filter
>> (the index) to light clients which request them. Light clients then fetch
>> these filters, query them locally, and _maybe_ fetch the block if a relevant
>> item matches. The cool part is that blocks can be fetched from _any_
>> source, once the light client deems it necessary. Our primary motivation
>> for this work was enabling a light client mode for lnd[4] in order to
>> support a more light-weight back end, paving the way for the usage of
>> Lightning on mobile phones and other devices. We've integrated neutrino
>> as a back end for lnd, and will be making the updated code public very
>> soon.
>>
>> One specific area we'd like feedback on is the parameter selection. Unlike
>> BIP-37 which allows clients to dynamically tune their false positive rate,
>> our proposal uses a _fixed_ false-positive rate. Within the document, it's
>> currently specified as P = 1/2^20. We've done a bit of analysis,
>> attempting to minimize the following sum:
>> filter_download_bandwidth + expected_block_false_positive_bandwidth. Alex
>> has made a JS calculator that allows y'all to explore the effect of
>> tweaking the false positive rate in addition to the following variables:
>> the number of items the wallet is scanning for, the size of the blocks,
>> number of blocks fetched, and the size of the filters themselves. The
>> calculator calculates the expected bandwidth utilization using the CDF of
>> the Geometric Distribution. The calculator can be found here:
>> https://aakselrod.github.io/gcs_calc.html. Alex also has an empirical
>> script he's been running on actual data, and the results seem to match up
>> rather nicely.
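A minimal sketch of the optimization target described above (the parameter names here are illustrative, not taken from the calculator):

```python
def expected_bandwidth(n_blocks, filter_bytes, block_bytes, n_watched, fp):
    """Rough model of total expected bytes downloaded: every filter,
    plus a full block whenever at least one of the n_watched items
    matches a block's filter spuriously. Per block, that happens
    with probability 1 - (1 - fp)**n_watched."""
    p_match = 1.0 - (1.0 - fp) ** n_watched
    return n_blocks * (filter_bytes + p_match * block_bytes)
```

With fp = 0 this degenerates to pure filter download cost, and with fp = 1 every block is fetched, which matches the two bandwidth extremes the calculator lets you explore.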
>>
>> We were excited to see that Karl Johan Alm (kallewoof) has done some
>> (rather extensive!) analysis of his own, focusing on a distinct encoding
>> type [5]. I haven't had the time to dig into his report yet, but I
>> think I've read enough to extract the key difference in our encodings: his
>> filters use a binomial encoding _directly_ on the filter contents, while we
>> instead create a Golomb-Coded set with the contents being _hashes_ (we use
>> siphash) of the filter items.
>>
>> Using a fixed fp=20 (i.e. P = 1/2^20), I have some stats detailing the
>> total index size, as well as averages for both mainnet and testnet. For
>> mainnet, using the filter contents as currently described in the BIP
>> (basic + extended), the total size of the index comes out to 6.9GB. The
>> breakdown is as follows:
>>
>> * total size:  6976047156
>> * 

Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Olaoluwa Osuntokun via bitcoin-dev
Tomas wrote:
> A rough estimate would indicate this to be about 2-2.5x as big per block
> as your proposal, but comes with rather different security
> characteristics, and would not require download since genesis.

Our proposal _doesn't_ require downloading from genesis, if by
"downloading" you mean downloading all the blocks. Clients only need to
sync the block+filter headers, then (if they don't care about historical
blocks), will download filters from their "birthday" onwards.

> The client could verify the TXIDs against the merkle root with a much
> stronger (PoW) guarantee compared to the guarantee based on the assumption
> of peers being distinct, which your proposal seems to make

Our proposal only makes a "one honest peer" assumption, which is the same
as any other operating mode. Also, as clients still download all the
headers, they're able to verify PoW conformance/work as normal.

> I don't completely understand the benefit of making the outpoints and
> pubkey hashes (weakly) verifiable. These only serve as notifications and
> therefore do not seem to introduce an attack vector.

Not sure what you mean by this. Care to elaborate?

> I think client-side filtering is definitely an important route to take,
> but is it worth compressing away the information to verify the merkle
> root?

That's not the case with our proposal. Clients get the _entire_ block (if
they need it), so they can verify the merkle root as normal. Unless one of
us is misinterpreting the other here.

-- Laolu


On Thu, Jun 8, 2017 at 6:34 AM Tomas via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Thu, Jun 1, 2017, at 21:01, Olaoluwa Osuntokun via bitcoin-dev wrote:
>
> Hi y'all,
>
> Alex Akselrod and I would like to propose a new light client BIP for
> consideration:
>*
> https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>
>
>
> Very interesting.
>
> I would like to consider how this compares to another light client type
> with rather different security characteristics where each client would
> receive for each transaction in each block,
>
> * The TXID (uncompressed)
> * The spent outpoints (with TXIDs compressed)
> * The pubkey hash (compressed to a reasonable amount of false positives)
>
> A rough estimate would indicate this to be about 2-2.5x as big per block
> as your proposal, but comes with rather different security characteristics,
> and would not require download since genesis.
>
> The client could verify the TXIDs against the merkle root with a much
> stronger (PoW) guarantee compared to the guarantee based on the assumption
> of peers being distinct, which your proposal seems to make. Like your
> proposal this removes the privacy and processing issues from server-side
> filtering, but unlike your proposal retrieval of all txids in each block
> can also serve as a basis for fraud proofs and (disprovable) fraud hints,
> without resorting to full block downloads.
>
> I don't completely understand the benefit of making the outpoints and
> pubkey hashes (weakly) verifiable. These only serve as notifications and
> therefore do not seem to introduce an attack vector. Omitting data is
> always possible, so receiving data is a prerequisite for verification, not
> an assumption that can be made.  How could an attacker benefit from "hiding
> notifications"?
>
> I think client-side filtering is definitely an important route to take,
> but is it worth compressing away the information to verify the merkle root?
>
> Regards,
> Tomas van der Wansem
> bitcrust
>


Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Olaoluwa Osuntokun via bitcoin-dev
Gregory wrote:
> I see the inner loop of construction and lookup are free of
> non-constant divmod. This will result in implementations being
> needlessly slow

Ahh, sipa brought this up the other day, but I thought he was referring to the
coding loop (which uses a power of 2 divisor/modulus), not the
siphash-then-reduce loop.

> I believe this can be fixed by using this approach
>
http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
> which has the same non-uniformity as mod but needs only a multiply and
> shift.

Very cool, I wasn't aware of the existence of such a mapping.

Correct me if I'm wrong, but from my interpretation we can't use that
method as described, as we need to output 64-bit integers rather than
32-bit integers. A range of 32 bits would constrain the number of items
we could encode to ~4096 (N * 2^20 must fit within 2^32, so N <= 2^12) to
ensure that we don't overflow with fp values such as 20 (which we
currently use in our code).

If filter commitments are to be considered for a soft-fork in the future,
then we should definitely optimize the construction of the filters as much
as possible! I'll look into that paper you referenced to get a feel for
just how complex the optimization would be.

> Shouldn't all cases in your spec where you have N=transactions be
> n=indexed-outputs? Otherwise, I think your golomb parameter and false
> positive rate are wrong.

Yep! Nice catch. Our code is correct, but the mistake in the spec was an
oversight on my part. I've pushed a commit[1] to the bip repo referenced
in the OP to fix this error.

I've also pushed another commit to explicitly take advantage of the fact
that P is a power-of-two within the coding loop [2].
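To illustrate what the power-of-two optimization buys inside the coding loop, here is a minimal Golomb-Rice codec sketch with modulus P = 2^k (illustrative Python, not the reference implementation): the quotient is unary-coded and the remainder is simply the k low-order bits, so no division or modulo is needed.

```python
def gr_encode(delta, k):
    """Encode one delta with Rice parameter k (P = 2**k): the
    quotient delta >> k in unary (q ones then a zero), followed by
    the k low-order remainder bits. Bits are returned as a string
    for readability."""
    q, r = delta >> k, delta & ((1 << k) - 1)
    return '1' * q + '0' + format(r, '0{}b'.format(k))

def gr_decode(bits, k):
    """Inverse of gr_encode for a single value."""
    q = 0
    while bits[q] == '1':
        q += 1
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r
```

In the actual filter, each encoded value is the delta between successive sorted hash values; the sketch above only shows the per-value bit layout.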

-- Laolu

[1]:
https://github.com/Roasbeef/bips/commit/bc5c6d6797f3df1c4a44213963ba12e72122163d
[2]:
https://github.com/Roasbeef/bips/commit/578a4e3aa8ec04524c83bfc5d14be1b2660e7f7a


On Wed, Jun 7, 2017 at 2:41 PM Gregory Maxwell  wrote:

> On Thu, Jun 1, 2017 at 7:01 PM, Olaoluwa Osuntokun via bitcoin-dev
>  wrote:
> > Hi y'all,
> >
> > Alex Akselrod and I would like to propose a new light client BIP for
> > consideration:
> >*
> https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>
> I see the inner loop of construction and lookup are free of
> non-constant divmod. This will result in implementations being
> needlessly slow (especially on arm, but even on modern x86_64 a
> division is a 90 cycle-ish affair.)
>
> I believe this can be fixed by using this approach
>
> http://lemire.me/blog/2016/06/27/a-fast-alternative-to-the-modulo-reduction/
>which has the same non-uniformity as mod but needs only a multiply
> and shift.
>
> Otherwise fast implementations will have to implement the code to
> compute bit twiddling hack exact division code, which is kind of
> complicated. (e.g. via the technique in "{N}-bit Unsigned Division via
> {N}-bit Multiply-Add" by Arch D. Robison).
>
> Shouldn't all cases in your spec where you have N=transactions be
> n=indexed-outputs? Otherwise, I think your golomb parameter and false
> positive rate are wrong.
>


Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Olaoluwa Osuntokun via bitcoin-dev
Karl wrote:

> I am also curious if you have considered digests containing multiple
> blocks. Retaining a permanent binsearchable record of the entire chain is
> obviously too space costly, but keeping the last X blocks as binsearchable
> could speed up syncing for clients tremendously, I feel.

Originally we hadn't considered such an idea. Grasping the concept a bit
better, I can see how that may result in considerable bandwidth savings
(for purely negative queries) for clients doing a historical sync, or
catching up to the chain after being inactive for months/weeks.

If we were to pursue tacking this approach onto the current BIP proposal,
we could do it in the following way:

   * The `getcfilter` message gains an additional "Level" field. Using
 this field, the range of blocks to be included in the returned filter
     would be 2^Level. So a level of 0 is just the single filter, while a
     level of 3 covers the 8 blocks past the block hash, etc.

   * Similarly, the `getcfheaders` message would also gain a similar field
 with identical semantics. In this case each "level" would have a
 distinct header chain for clients to verify.
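Under that (hypothetical) extension, the mapping from Level to the covered block range would simply be:

```python
def blocks_covered(anchor_height, level):
    """Sketch of the proposed `Level` field semantics: a level-L
    request covers 2**L blocks starting at the anchor block, so
    level 0 is the ordinary single-block filter."""
    return list(range(anchor_height, anchor_height + (1 << level)))
```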

> How fast are these to create? Would it make sense to provide digests on
> demand in some cases, rather than keeping them around indefinitely?

For larger blocks (like the one referenced at the end of this mail) full
construction of the regular filter takes ~10-20ms (most of this is spent
extracting the data pushes). With smaller blocks, it quickly dips down
into the nano- to microsecond range.

Whether to keep _all_ the filters on disk, or to dynamically re-generate a
particular range (possibly most of the historical data) is an
implementation detail. Nodes that already do block pruning could discard
very old filters once the header chain is constructed allowing them to
save additional space, as it's unlikely most clients would care about the
first 300k or so blocks.

> Ahh, so you actually make a separate digest chain with prev hashes and
> everything. Once/if committed digests are soft forked in, it seems a bit
> overkill but maybe it's worth it.

Yep, this is only a hold-over until when/if a commitment to the filter is
soft-forked in. In that case, there could be some extension message to
fetch the filter hash for a particular block, along with a merkle proof of
the coinbase transaction to the merkle root in the header.

> I created digests for all blocks up until block #469805 and actually ended
> up with 5.8 GB, which is 1.1 GB lower than what you have, but may be worse
> perf-wise on false positive rates and such.

Interesting, are you creating the equivalent of both our "regular" and
"extended" filters? Each of the filter types consumes roughly 3.5GB in
isolation, with the extended filter type on average consuming more bytes
because it includes sigScript/witness data as well.

It's worth noting that those numbers include the fixed 4-byte value for
"N" that's prepended to each filter once it's serialized (though that
doesn't add a considerable amount of overhead). Alex and I were
considering instead using Bitcoin's var-int (CompactSize) encoding for
that number. This would use a single byte for empty and most small
filters (< 253 items), 3 bytes for filters with up to 2^16 - 1 items,
and 5 bytes for the remainder of the cases.
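For reference, Bitcoin's standard CompactSize var-int encoding that we're considering, shown here as a sketch:

```python
def compact_size(n):
    """Bitcoin's CompactSize encoding: 1 byte for values below 0xfd,
    3 bytes up to 0xffff, 5 bytes up to 0xffffffff, 9 bytes beyond.
    Multi-byte values are little-endian, prefixed by a marker byte."""
    if n < 0xfd:
        return bytes([n])
    if n <= 0xffff:
        return b'\xfd' + n.to_bytes(2, 'little')
    if n <= 0xffffffff:
        return b'\xfe' + n.to_bytes(4, 'little')
    return b'\xff' + n.to_bytes(8, 'little')
```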

> For comparison, creating the digests above (469805 of them) took
> roughly 30 mins on my end, but using the kstats format so probably
> higher on an actual node (should get around to profiling that...).

Does that include the time required to read the blocks from disk? Or just
the CPU computation of constructing the filters? I haven't yet kicked off
a full re-index of the filters, but for reference this block[1] on testnet
takes ~18ms for the _full_ indexing routine with our current code+spec.

[1]: 052184fbe86eff349e31703e4f109b52c7e6fa105cd1588ab6aa

-- Laolu


On Sun, Jun 4, 2017 at 7:18 PM Karl Johan Alm via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sat, Jun 3, 2017 at 2:55 AM, Alex Akselrod via bitcoin-dev
>  wrote:
> > Without a soft fork, this is the only way for light clients to verify
> that
> > peers aren't lying to them. Clients can request headers (just hashes of
> the
> > filters and the previous headers, creating a chain) and look for
> conflicts
> > between peers. If a conflict is found at a certain block, the client can
> > download the block, generate a filter, calculate the header by hashing
> > together the previous header and the generated filter, and banning any
> peers
> > that don't match. A full node could prune old filters if you wanted and
> > recalculate them as necessary if you just keep the filter header chain
> info
> > as really old filters are unlikely to be requested by correctly written
> > software but you can't guarantee every client will follow best practices
> > either.
>
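The header-chain construction quoted above can be sketched as follows (the choice of double-SHA256 and the concatenation order are my assumptions for illustration, not normative):

```python
import hashlib

def dsha256(data):
    """Double-SHA256, Bitcoin's usual hash function."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def filter_header(filter_bytes, prev_header):
    """Next link in the filter header chain: hash the serialized
    filter, then hash that together with the previous header. A
    client comparing headers from multiple peers can pinpoint the
    first block where they diverge and fetch that block to settle
    the conflict."""
    return dsha256(dsha256(filter_bytes) + prev_header)
```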
> Ahh, so you actually make a separate digest chain with prev hashes and
> everything. 

Re: [bitcoin-dev] BIP Proposal: Compact Client Side Filtering for Light Clients

2017-06-08 Thread Tomas via bitcoin-dev
On Thu, Jun 1, 2017, at 21:01, Olaoluwa Osuntokun via bitcoin-dev wrote:
> Hi y'all,
>
> Alex Akselrod and I would like to propose a new light client BIP for
> consideration:
> * https://github.com/Roasbeef/bips/blob/master/gcs_light_client.mediawiki
>

Very interesting. 

I would like to consider how this compares to another light client type
with rather different security characteristics where each client would
receive for each transaction in each block,
* The TXID (uncompressed)
* The spent outpoints (with TXIDs compressed)
* The pubkey hash (compressed to a reasonable amount of false positives)

A rough estimate would indicate this to be about 2-2.5x as big per block
as your proposal, but comes with rather different security
characteristics, and would not require download since genesis.
The client could verify the TXIDs against the merkle root with a much
stronger (PoW) guarantee compared to the guarantee based on the
assumption of peers being distinct, which your proposal seems to make.
Like your proposal this removes the privacy and processing issues from
server-side filtering, but unlike your proposal retrieval of all txids
in each block can also serve as a basis for fraud proofs and
(disprovable) fraud hints, without resorting to full block downloads.
I don't completely understand the benefit of making the outpoints and
pubkey hashes (weakly) verifiable. These only serve as notifications and
therefore do not seem to introduce an attack vector. Omitting data is
always possible, so receiving data is a prerequisite for verification,
not an assumption that can be made.  How could an attacker benefit from
"hiding notifications"?
I think client-side filtering is definitely an important route to
take, but is it worth compressing away the information to verify the
merkle root?
Regards,
Tomas van der Wansem
bitcrust



Re: [bitcoin-dev] Replay attacks make BIP148 and BIP149 untenable

2017-06-08 Thread Conner Fromknecht via bitcoin-dev
I don't normally post here, but I'm sorry: if you don't see those two as
equal, then I think you have misunderstood the *entire* value proposition
of cryptocurrencies.

The state of any cryptocurrency should entirely (and only) be defined by
its ledger. If the state of the system can be altered outside of the rules
governing its ledger, then the system isn't secure. It doesn't matter
whether the people making those changes are the ones that are leading the
project or not. An "irregular state change" is a fancy term for a bailout.

I'm sure I speak for more than myself in saying that an "irregular state
change" is equivalent to modifying the underlying ledger. Let's not let
semantics keep us from recognizing what actually took place.

-Conner

On Wed, Jun 7, 2017 at 14:14 Nick Johnson via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Wed, Jun 7, 2017 at 5:27 PM Tao Effect  wrote:
>
>> Nick,
>>
>> Please don't spread misinformation. Whatever you think of the DAO hard
>> fork, it's a simple fact that the Ethereum ledger was not edited.
>>
>>
>> This sort of email is unhelpful to this conversation, and it certainly
>> doesn't help with the perception that Ethereum is nothing but a bunch of
>> hypocritical Bankers 2.0.
>>
>
>
>>
>> Everyone knows you didn't edit Ethereum Classic, but the hard fork,
>> which was re-branded as Ethereum, was edited.
>>
>
> That's not what I was suggesting. My point is that the ledger was never
> edited. An 'irregular state change' was added at a specific block height,
> but the ledger remains inviolate.
>
> I'm sure I don't have to explain the difference between the ledger and the
> state to you, or why it's significant that the ledger wasn't (and can't be,
> practically) modified.
>
> -Nick
>
>
>> - Greg
>>
>> --
>> Please do not email me anything that you are not comfortable also sharing 
>> with
>> the NSA.
>>
>> On Jun 7, 2017, at 6:25 AM, Nick Johnson  wrote:
>>
>> On Wed, Jun 7, 2017 at 12:02 AM Gregory Maxwell via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Tue, Jun 6, 2017 at 10:39 PM, Tao Effect via bitcoin-dev
>>>  wrote:
>>> > I believe the severity of replay attacks is going unvoiced and is not
>>> > understood within the bitcoin community because of their lack of
>>> experience
>>> > with them.
>>>
>>> Please don't insult our community-- the issues with replay were
>>> pointed out by us to Ethereum in advance and were cited specifically
>>> in prior hardfork discussions long before Ethereum started editing
>>> their ledger for the economic benefit of its centralized
>>> administrators.
>>
>>
>> Please don't spread misinformation. Whatever you think of the DAO hard
>> fork, it's a simple fact that the Ethereum ledger was not edited.
>>
>> -Nick Johnson
>>
>>


Re: [bitcoin-dev] Replay attacks make BIP148 and BIP149 untenable

2017-06-08 Thread Nick Johnson via bitcoin-dev
On Thu, Jun 8, 2017 at 6:44 AM Conner Fromknecht  wrote:

> I don't normally post here, but I'm sorry, if you don't see those two as
> equal, then I think you have misunderstood the *entire* value proposition
> of cryptocurrencies.
>
> The state of any cryptocurrency should entirely (and only) be defined by
> its ledger. If the state of the system can be altered outside of the rules
> governing its ledger, then the system isn't secure.


This is true of any blockchain: you can always change the rules with the
consent of the participants.


> It doesn't matter whether the people making those changes are the ones
> that are leading the project or not. An "irregular state change" is a fancy
> term for a bailout.
>
> I'm sure I speak for more than myself in saying that an "irregular state
> change" is equivalent to modifying the underlying ledger. Let's not let
> semantics keep us from recognizing what actually took place.
>

It's not; modifying the ledger would rewrite history, erasing the record of
the original transactions. That's a fundamentally different operation, both
technically and semantically.


> -Conner
>
> On Wed, Jun 7, 2017 at 14:14 Nick Johnson via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> On Wed, Jun 7, 2017 at 5:27 PM Tao Effect  wrote:
>>
>>> Nick,
>>>
>>> Please don't spread misinformation. Whatever you think of the DAO hard
>>> fork, it's a simple fact that the Ethereum ledger was not edited.
>>>
>>>
>>> This sort of email is unhelpful to this conversation, and it certainly
>>> doesn't help with the perception that Ethereum is nothing but a bunch of
>>> hypocritical Bankers 2.0.
>>>
>>
>>
>>>
>>> Everyone knows you didn't edit Ethereum Classic, but the hard fork,
>>> which was re-branded as Ethereum, was edited.
>>>
>>
>> That's not what I was suggesting. My point is that the ledger was never
>> edited. An 'irregular state change' was added at a specific block height,
>> but the ledger remains inviolate.
>>
>> I'm sure I don't have to explain the difference between the ledger and
>> the state to you, or why it's significant that the ledger wasn't (and can't
>> be, practically) modified.
>>
>> -Nick
>>
>>
>>> - Greg
>>>
>>> --
>>> Please do not email me anything that you are not comfortable also sharing 
>>> with
>>> the NSA.
>>>
>>> On Jun 7, 2017, at 6:25 AM, Nick Johnson  wrote:
>>>
>>> On Wed, Jun 7, 2017 at 12:02 AM Gregory Maxwell via bitcoin-dev <
>>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>>
 On Tue, Jun 6, 2017 at 10:39 PM, Tao Effect via bitcoin-dev
  wrote:
 > I believe the severity of replay attacks is going unvoiced and is not
 > understood within the bitcoin community because of their lack of
 experience
 > with them.

 Please don't insult our community-- the issues with replay were
 pointed out by us to Ethereum in advance and were cited specifically
 in prior hardfork discussions long before Ethereum started editing
 their ledger for the economic benefit of its centralized
 administrators.
>>>
>>>
>>> Please don't spread misinformation. Whatever you think of the DAO hard
>>> fork, it's a simple fact that the Ethereum ledger was not edited.
>>>
>>> -Nick Johnson
>>>
>>>


Re: [bitcoin-dev] User Activated Soft Fork Split Protection

2017-06-08 Thread James Hilliard via bitcoin-dev
On Wed, Jun 7, 2017 at 8:01 PM, Jared Lee Richardson  wrote:
>> If you're looking for hard numbers at this point you aren't likely to
>> find them because not everything is easy to measure directly.
>
> There's quite a few hard numbers that are available that are of varying use.
> Mining commitments are a major one because of the stalled chain problem.
> Node signaling represents some data because while it can be sybiled, they
> are cheap but not free to run.  Upvotes and comments on reddit and other
> forums might be of some use, but there's not a clear supermajority driving
> every pro-uasf comment up and every anti-uasf comment down, and Reddit
> obscures the upvote/downvotes pretty well.  It could be a gleaned datapoint
> if someone pulled the comments, manually evaluated their likely position on
> the matter(neutrally), and then reported on it, but that is a lot of work
> and I think it is unlikely to show anything except how deep the rifts in the
> community are.  Of the two main statistics available, they do not support
> the idea that UASF has any chance of success.  Of the third, it at least
> shows that there is deep opposition that is nearly equal to the support
> amongst the forums most likely to support UASF.
Right, it's not straightforward to measure because the hard numbers
that we do have tell an incomplete story. In addition, the metric that
BIP148 primarily depends on (economic support) is much harder to
measure than other metrics such as hashpower support.
>
> So I'll take anything, any statistic that actually indicates UASF has a
> chance in hell of succeeding, at least that would be worth something.
> Otherwise it's all much ado about nothing.
>
>> We'll know more as we get closer to BIP148 activation by looking at the
>> markets.
>
> What markets?  Where?  How would we know?
There will likely be some exchanges offering markets for each side of
a potential split separately ahead of BIP148 activation.
>
>> > It doesn't have those issues during the segwit activation, ergo there is
>> > no
>> > reason for segwit-activation problems to take priority over the very
>> > real
>> > hardfork activation problems.
>
>> And yet segwit2x is insisting on activation bundling which needlessly
>> complicates and delays SegWit activation.
>
> Because it is not segwit that appears to have the supermajority
> consensus.
I think you've misunderstood the situation, SegWit has widespread
support but has been turned into a political bargaining chip for other
less desirable changes that do not have widespread support.
>
>> Sure, technical changes can be made for political reasons, we should
>> at least be clear in regards to why particular decisions are being
>> made. I'm supportive of a hard fork for technical reasons but not
>> political ones as are many others.
>
> Well, then we have a point of agreement at least. :)
>
>
> On Wed, Jun 7, 2017 at 5:44 PM, James Hilliard 
> wrote:
>>
>> On Wed, Jun 7, 2017 at 7:20 PM, Jared Lee Richardson 
>> wrote:
>> >> Not really, there are a few relatively simple techniques such as RBF
>> >> which can be leveraged to get confirmations on on-side before double
>> >> spending on another. Once a transaction is confirmed on the non-BIP148
>> >> chain then the high fee transactions can be made on only the BIP148
>> >> side of the split using RBF.
>> >
>> > Ah, so the BIP148 client handles this on behalf of its less technical
>> > users then, yes?
>> It's not automatic but exchanges will likely handle it on behalf of
>> the less technical users. BIP148 is not intended to cause a permanent
>> chain split, however, which is why this was not built in.
>> >
>> >>  Exchanges will likely do this splitting
>> >> automatically for uses as well.
>> >
>> > Sure, Exchanges are going to dedicate hundreds of developer hours and
>> > thousands of support hours to support something that they've repeatedly
>> > told
>> > everyone must have replay protection to be supported.  They're going to
>> > do
>> > this because 8% of nodes and <0.5% of miners say they'll be rewarded
>> > richly.
>> > Somehow I find that hard to believe.
>> They are very likely to, most have contingency plans for this sort of
>> thing ready to go due to their experience with the ETH/ETC fork.
>> >
>> > Besides, if the BIP148 client does it for them, they wouldn't have to
>> > dedicate those hundreds of developer hours.  Right?
>> >
>> > I can't imagine how this logic is getting you from where the real data
>> > is to
>> > the assumption that an economic majority will push BIP148 into being
>> > such a
>> > more valuable chain that switching chains will be attractive to enough
>> > miners.  There's got to be some real data that convinces you of this
>> > somewhere?
>> If you're looking for hard numbers at this point you aren't likely to
>> find them because not everything is easy to measure directly.
>> >
>> >> Both are issues, but