Re: [bitcoin-dev] Test cases for Taproot signature message

2021-09-17 Thread Riccardo Casatta via bitcoin-dev
Hi Giacomo,

I wrote the Rust implementation of Bitcoin signature messages and, to
double-check it, I created some test vectors you can see at
https://github.com/rust-bitcoin/rust-bitcoin/blob/b7f984972ad6cb4942827c2b7c401f590588cdcf/src/util/sighash.rs#L689-L799.
These vectors were created by printing intermediate results from
https://github.com/bitcoin/bitcoin/blob/6401de0133e32a641ed9e78a85b3aa337c75d190/test/functional/feature_taproot.py
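
To illustrate the kind of intermediate values that make such vectors
debuggable, here is a minimal Python sketch (over made-up transaction data)
of the BIP 143 intermediate hashes that the quoted message below praises:

    import hashlib
    import struct

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    # Hypothetical single-input, single-output transaction fields.
    outpoints = [bytes(32) + struct.pack("<I", 0)]   # txid || vout
    sequences = [struct.pack("<I", 0xFFFFFFFF)]
    # value (8 bytes LE) || script length (0x19) || 25 placeholder script bytes
    outputs = [struct.pack("<q", 50000) + bytes([0x19]) + bytes(0x19)]

    # BIP 143 intermediate hashes: publishing these preimages lets an
    # implementer pinpoint exactly which serialization step went wrong.
    hash_prevouts = dsha256(b"".join(outpoints))
    hash_sequence = dsha256(b"".join(sequences))
    hash_outputs = dsha256(b"".join(outputs))
    print(hash_prevouts.hex(), hash_sequence.hex(), hash_outputs.hex())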

On Thu, Sep 16, 2021 at 11:40 PM Giacomo Caironi via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Hi,
> recently I have worked on a Python implementation of Bitcoin signature
> messages, and I have found that there was much better documentation for the
> Segwit signature message than for Taproot.
>
> 1) The Segwit signature message got its own BIP, complete with test cases
> covering only that specific function; Taproot, on the other hand, has the
> signature message function defined in BIP 341 and its test vectors kept
> separately. This is confusing. Shouldn't we create a dedicated BIP just for
> the Taproot signature message, exactly like Segwit?
>
> 2) The test vectors for Taproot have no documentation and, most
> importantly, they are not atomic, in the sense that they do not target a
> specific part of the Taproot code but all of it. This may not be a very big
> problem in general, but for signature verification it is. Because there are
> hashes involved, we can't really debug why a signature message doesn't pass
> validation: either it is valid or it is not. BIP 143 is really good in this
> case, because it provides hash preimages, so it is possible to debug the
> function and see where something went wrong. Because of this, writing the
> Segwit signature hash function took a fraction of the time compared to
> Taproot.
>
> If this idea is accepted I will be more than happy to write the test cases
> for Taproot.
>
> BTW this is the first time I contribute to Bitcoin, so let me know if I was
> rude or did something wrong. Moreover, English is not my first language, so
> I apologize if I wrote something awful above.


-- 
Riccardo Casatta - @RCasatta 


Re: [bitcoin-dev] Progress on bech32 for future Segwit Versions (BIP-173)

2020-10-20 Thread Riccardo Casatta via bitcoin-dev
Here is a mainnet tx done with the Aqua wallet, which is based on rust-bitcoin:
https://blockstream.info/tx/b48a59fa9e036e997ba733904f631b1a64f5274be646698e49fd542141ca9404?expand

I am not sure about the scriptPubKey starting with 51, so I opened this:
https://github.com/rust-bitcoin/rust-bitcoin/pull/504
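
For context, 0x51 is OP_1, which in the BIP 141 scriptPubKey format encodes
witness version 1; a minimal sketch (with a made-up 32-byte witness program)
of such an output script:

    # A witness v1 (Taproot) scriptPubKey: OP_1 <32-byte program>.
    witness_program = bytes.fromhex("aa" * 32)  # placeholder, not a real key

    OP_1 = 0x51  # witness versions 1..16 are encoded as OP_1..OP_16
    script_pubkey = bytes([OP_1, len(witness_program)]) + witness_program

    assert script_pubkey[0] == 0x51 and len(script_pubkey) == 34
    print(script_pubkey.hex())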


On Tue, Oct 20, 2020 at 5:32 AM Rusty Russell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> Rusty Russell  writes:
> > Accepts
> > ---
> > Green: ef1662fd2eb736612afb1b60e3efabfdf700b1c4822733d9dbe1bfee607a5b9b
> blockchain.info: 64b0fcb22d57b3c920fee1a97b9facec5b128d9c895a49c7d321292fb4156c21
>
> PEBKAC.  Pasted wrong address.  Here are correct results:
>
> Rejects
> ---
> c-lightning: "Could not parse destination address, destination should be a valid address"
> Phoenix: "Invalid data.  Please try again."
> blockchain.info: "Your Bitcoin transaction failed to send. Please try again."
>
> Accepts
> ---
> Green: 9e4ab6617a2983439181a304f0b4647b63f51af08fdd84b0676221beb71a8f21
>
> Cheers,
> Rusty.



--
Riccardo Casatta - @RCasatta


[bitcoin-dev] PSBT in QR codes

2020-04-27 Thread Riccardo Casatta via bitcoin-dev
Hi all,

there is some discussion happening [1] about how to encode a PSBT in QR
codes.

According to the specification (page 15 [2]), a version 40 QR code can
contain up to 3706 bytes of data; however, practical limitations are much
lower, and a PSBT can grow bigger anyway, so the issue is that a PSBT does
not fit in 1 QR code.

There are proposals suggesting animated QR codes, but I don't think that's a
good idea, for the following reasons:
* they are not easy to print
* it's not clear, at a human glance, how much data is being transferred,
thus allowing more space for attacks
* old hardware may have resource constraints and may not be able to scan them

There are proposals suggesting alphanumeric mode for QR codes plus a header
(like "message 1 of n") to allow data reconstruction. The main arguments for
this choice are:
* use of the built-in standard scanner
* the data can be copy-pasted
* not a big loss in efficiency compared to binary, given a proper encoding
* industrial QR code scanners put a \r at the end of the transmission
(making binary mode difficult to handle without timeouts or similar)

I don't think alphanumeric mode with custom headers is a good idea; I think
we should use binary encoding and the mode already available in the QR code
specification, called "structured append" (page 55 [2]). The corresponding
counter-points are:
* since the data needs to be reconstructed, I would avoid the built-in
scanner and manual appending of strings anyway
* we can keep the already-used base64 for copy-paste
* the best encoding we already have, bech32, is about 10% less efficient
than binary (bech32 carries 5 bits per character, while QR alphanumeric mode
costs 11 bits per 2 characters, i.e. 5.5 bits per character), and if we want
to be more efficient we would need to introduce a new specific encoding
* I don't have a strong counter-point on industrial scanners; however, if
they use \r to signal the end of transmission, they don't support binary
well at all. Why don't they send how many bytes they read?

There are some doubts about structured append because it is not widely
supported in QR code libraries. While this is true, I verified the widely
used zxing library on Android, and Luca Vaccaro verified the Apple built-in
scanner; both libraries let you access the scanned raw bytes, allowing you
to parse the structured append header.
For reference, structured append allows chaining up to 16 QR codes and
contains 1 byte of parity.
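
As a minimal sketch (assuming, as with zxing and the Apple scanner, that the
library exposes the raw codeword bytes), the structured append header can be
parsed like this; the layout per ISO/IEC 18004 is a 4-bit mode indicator
(0b0011), a 4-bit sequence index, a 4-bit total count minus one, and an
8-bit parity byte:

    def parse_structured_append(raw: bytes):
        if len(raw) < 3:
            return None
        bits = int.from_bytes(raw[:3], "big")  # 20-bit header sits in 3 bytes
        mode = (bits >> 20) & 0xF
        if mode != 0b0011:
            return None                        # not a structured-append symbol
        index = (bits >> 16) & 0xF             # 0-based position of this code
        total = ((bits >> 12) & 0xF) + 1       # up to 16 chained codes
        parity = (bits >> 4) & 0xFF            # XOR of all original data bytes
        return index, total, parity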

[1] https://github.com/cryptoadvance/specter-diy/issues/57
[2]
https://www.swisseduc.ch/informatik/theoretische_informatik/qr_codes/docs/qr_standard.pdf


--
Riccardo Casatta - @RCasatta


Re: [bitcoin-dev] Draft BIP for SNICKER

2019-11-06 Thread Riccardo Casatta via bitcoin-dev
Hello Adam,

are you sure you can't tackle the watch-only issue?

What if the proposer creates the coinjoin tx, plus another tx (encrypted
with the shared secret) which is a 1-input, 1-output ("1to1") tx spending
his output to another of his keys?
At that point, when the receiver accepts the proposal tx, he could create
other 1to1 txs spending his tweaked output to pure BIP32-derived keys; he
then broadcasts the coinjoin tx together with, for every output of the
coinjoin tx, one more 1to1 tx.

Notes:
* We are obviously spending more in fees because there are more txs
involved, but the receiver ends up having only BIP32-derived outputs.
* The proposer must create his 1to1 tx too, otherwise the receiver loses
privacy by being the only one creating 1to1 txs.
* A good strategy could be to give the coinjoin tx a very low fee and the
1to1 txs a higher one, so there is less risk that only the coinjoin gets
mined.
* With this spending strategy, the wallet's initial scan does not need to be
modified.


On Tue, Oct 22, 2019 at 3:29 PM AdamISZ via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> Just to chime in on these points:
>
> My discussions with ghost43 and ThomasV led me to the same conclusion, at
> least in general, for the whole watch-only issue:
>
> It's necessary that the key tweak (`c` as per the draft BIP) be known by
> the Proposer (because he has to add it to the transaction before signing)
> and the Receiver (to check ownership), but it must not be known by anyone
> else (else the Coinjoin function fails); hence it can't be publicly
> derivable in any way but must require information secret to the two
> parties. This can be a pure random value sent along with the encrypted
> proposal (the original concept), or based on such, or implicit via ECDH
> (arubi's suggestion, now in the draft, requiring each party to access their
> own secret key). So I reached the same conclusion: the classic watch-only
> use case of monitoring a wallet in real time with no privkey access is
> incompatible with this.
>
> It's worth mentioning a nuance, however: distinguish two requirements: (1)
> to recover from zero information and (2) to monitor in real time as new
> SNICKER transactions arrive.
>
> For (2) it's interesting to observe that the tweak `c` is not a
> money-controlling secret; it's only a privacy-controlling secret. If you
> imagined two wallets, one hot and one cold, with the second tracking the
> first but having a lower security requirement because cold, then the `c`
> values could be sent along from the hot to the cold, as they are created,
> without changing the cold's security model as they are not
> money-controlling private keys. They should still be encrypted of course,
> but that's largely a technical detail; if they were exposed, it would only
> break the effect of the coinjoin outputs being indistinguishable.
>
> For (1) the above does not apply; for there, we don't have anyone telling
> us what `c` values to look for, we have to somehow rederive, and to do that
> we need key access, so it reverts to the discussion above about whether it
> might be possible to interact with the cold wallet 'manually' so to speak.
>
> To be clear, I don't think either of the above paragraphs describe things
> that are particularly likely to be implemented, but the hot/cold monitoring
> is at least feasible, if there were enough desire for it.
>
> At the higher level, how important is this? I guess it just depends; there
> are similar problems (not identical, and perhaps more addressable?) in
> Lightning; importing keys is generally non-trivial; one can always sweep
> non-standard keys back into the HD tree, but clearly that is not really a
> solution in general; one can mark out wallets/seeds of this type as
> distinct; not all wallets need to have watch-only (phone wallets? small
> wallets? lower security?); one can prioritise spends of these coins. Etc.
>
> Some more general comments:
>
> Note Elichai's comment on the draft (repeated here for local convenience:
> https://gist.github.com/AdamISZ/2c13fb5819bd469ca318156e2cf25d79#gistcomment-3014924)
> about AES-GCM vs AES-CBC, any thoughts?
>
> I didn't discuss the security of the construction for a Receiver from a
> Proposer who should after all be assumed to be an attacker (except, I
> emphasised that PSBT parsing could be sensitive on this point); I hope it's
> clear to everyone that the construction Q = P + cG is only controllable by
> the owner of the discrete log of P (trivial reduction: if an attacker who
> knows c can find the private key q of Q, he can derive the private key p
> of P as q - c, thus he is an ECDLP cracker).
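
A minimal scalar-arithmetic sketch of that reduction (values are made up; n
is the secp256k1 group order, and Q = P + cG for the points corresponds to
q = p + c mod n for the private keys):

    # secp256k1 group order
    N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

    p = 0x12345678  # hypothetical private key of P (placeholder value)
    c = 0x9ABCDEF0  # tweak shared between Proposer and Receiver

    q = (p + c) % N          # private key of Q = P + c*G
    assert (q - c) % N == p  # knowing c and q immediately yields p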
>
> Thanks for all the comments so far, it's been very useful.
>
> AdamISZ/waxwing/Adam Gibson
>
> Sent with ProtonMail Secure Email.
>
> ‐‐‐ Original Message ‐‐‐
> On Monday, October 21, 2019 4:04 PM, SomberNight via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
> > > The SNICKER recovery process is, of course, only 

Re: [bitcoin-dev] Draft BIP for SNICKER

2019-10-21 Thread Riccardo Casatta via bitcoin-dev
The "Receiver" could immediately create a tx that spend the coinjoin
outputs to bip32 keys,
The hard part is that he had to delay the broadcast otherwise he loose
privacy

On Mon, Oct 21, 2019 at 2:08 AM David A. Harding via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Sun, Oct 20, 2019 at 12:29:25AM +, SomberNight via bitcoin-dev
> wrote:
> > waxwing, ThomasV, and I recently had a discussion about implementing
> > SNICKER in Electrum; specifically the "Receiver" role.
>
> That'd be awesome!
>
> > As the referenced section [0] explains, the "Receiver" can restore
> > from seed, and assuming he knows he needs to do extra scanning steps
> > (e.g. via a seed version that signals SNICKER support), he can find
> > and regain access to his SNICKER outputs. However, to calculate `c` he
> > needs access to his private keys, as it is the ECDH of one of the
> > Receiver's pubkeys and one of the Proposer's pubkeys.
> >
> > This means the proposed scheme is fundamentally incompatible with
> > watch-only wallets.
> >
> > [0]
> https://gist.github.com/AdamISZ/2c13fb5819bd469ca318156e2cf25d79#Storage_of_Keys
>
> Your logic seems correct for the watching half of the wallet, but I
> think it's ok to consider requiring interaction with the cold wallet.
> Let's look at the recovery procedure from the SNICKER documentation
> that you kindly cited:
>
> 1. Derive all regular addresses normally (doable watch-only for
> wallets using public BIP32 derivation)
>
> 2. Find all transactions spending an output for each of those
> addresses.  Determine whether the spend looks like a SNICKER
> coinjoin (e.g. "two equal-[value] outputs").  (doable watch-only)
>
> 3. "For each of those transactions, check, for each of the two equal
> sized outputs, whether one destination address can be regenerated
> from by taking c found in the method described above" (not doable
> watch only; requires private keys)
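
A minimal sketch of the watch-only heuristic in step #2 above (the "two
equal-value outputs" shape; the function name is illustrative):

    # Flag candidate SNICKER coinjoins: at least two outputs share a value.
    def looks_like_snicker(output_values: list) -> bool:
        return len(set(output_values)) < len(output_values)

    looks_like_snicker([50000, 50000, 12345])  # True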
>
> I'd expect the set of candidate transactions produced in step #2 to be
> pretty small and probably with no false positives for users not
> participating in SNICKER coinjoins or doing lots of payment batching.
> That means, if any SNICKER candidates were found by a watch-only wallet,
> they could be compactly bundled up and the user could be encouraged to
> copy them to the corresponding cold wallet using the same means used for
> PSBTs (e.g. USB drive, QR codes, etc).  You wouldn't even need the whole
> transactions, just the BIP32 index of the user's key, the pubkey of the
> suspected proposer, and a checksum of the resultant address.
>
> The cold wallet could then perform step #3 using its private keys and
> return a file/QRcode/whatever to the hot wallet telling it any shared
> secrets it found.
>
> This process may need to be repeated several times if an output created
> by one SNICKER round is spent in a subsequent SNICKER round.  This can be
> addressed by simply refusing to participate in chains of SNICKER
> transactions or by refusing to participate in chains of SNICKERs more
> than n long (requiring a maximum of n rounds of recovery).  It could also be
> addressed by the watching-only wallet looking ahead at the block chain a
> bit in order to grab SNICKER-like child and grandchild transactions of
> our SNICKER candidates and sending them also to the cold wallet for
> attempted shared secret recovery.
>
> The SNICKER recovery process is, of course, only required for wallet
> recovery and not normal wallet use, so I don't think a small amount of
> round-trip communication between the hot wallet and the cold wallet is
> too much to ask---especially since anyone using SNICKER with a
> watching-only wallet must be regularly interacting with their cold
> wallet anyway to sign the coinjoins.
>
> -Dave


-- 
Riccardo Casatta - @RCasatta 


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-06-06 Thread Riccardo Casatta via bitcoin-dev
Sorry if I continue on the subject even if custom filter types are
considered in BIP 157/158. I am doing it because:
1) with a fixed target FP=2^-20 (or 1/784931) and multi-layer filtering,
maybe it's reasonable to consider fewer than ~20 bits for the Golomb
encoding of the per-block filter (one day committed in the blockchain)
2) based on the answers received, the privacy leak from downloading a subset
of filters doesn't look like a concern
3) as far as I know, no one is considering using a map instead of a filter
for the upper layers of the filter.

Simplistic example:
Suppose a 2-block blockchain, where every block contains N items for the
filter:
1) in the currently discussed filter we have 2 filters of 20N bits each
2) in a two-layer solution, we have 1 map of (10+1)*2N bits and 2 filters of
10N bits each
The additional bit in the map discriminates whether the match is in the
first or in the second block.
Supposing there is 1 match across the two blocks, the filter size downloaded
in the first case is always 40N bits, while the expected downloaded size in
the second case is 22N + 10N + 2^-10*10N ~= 32N bits, with the same FP rate
because of independence.
This obviously isn't a full analysis of the methodology; the expected
downloaded size in the second case could go from a best case of 22N bits to
a worst case of 42N bits...
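
The same arithmetic as a small Python sketch (all quantities in units of N
bits, matching the example above):

    fp_layer = 2 ** -10            # false-positive rate of a 10-bit layer
    single_layer = 2 * 20          # two per-block filters at 20 bits/item
    layer_map = (10 + 1) * 2       # 10-bit hash + 1 block-index bit, 2 blocks
    expected = layer_map + 10 + fp_layer * 10  # map + matched block + FP risk
    print(single_layer, expected)  # 40 vs ~32.01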

@Gregory
> I don't know what portion of coins created are spent in the same 144 block
window...

About 50% (source code: https://github.com/RCasatta/coincount)

From block 393216 to 458752 (still waiting for results on the whole
blockchain):
Total outputs 264185587
size: 2 spent: 11791058 ratio:0.04463172322871649
size: 4 spent: 29846090 ratio:0.11297395266305728
size: 16 spent: 72543182 ratio:0.2745917475051355
size: 64 spent: 113168726 ratio:0.4283682818775424
size: 144 spent: 134294070 ratio:0.508332311103709
size: 256 spent: 148824781 ratio:0.5633342177747191
size: 1024 spent: 179345566 ratio:0.6788620379960395
size: 4096 spent: 205755628 ratio:0.7788298761355213
size: 16384 spent: 224448158 ratio:0.849585174379706

Another point to consider is that if we don't want the full transaction
history of our wallet but only the UTXOs, the upper-layer map could contain
only the items which are not already spent within the considered window. As
we can see from the previous results, if the window is 16384 blocks, ~85% of
the elements are already spent, suggesting very high temporal locality.
(Apart from 144, I chose power-of-2 windows so there is an integer number of
bits in the map.)

It's possible we need ~20 bits anyway for the per-block filters, because
there are always-connected wallets which, once synced, always download just
the last filter; still, the upper-layer map looks very promising for longer
syncs.

On Wed, Jun 6, 2018 at 3:13 AM Olaoluwa Osuntokun <
laol...@gmail.com> wrote:

> It isn't being discussed atm (but was discussed 1 year ago when the BIP
> draft was originally published), as we're in the process of removing items
> or filters that aren't absolutely necessary. We're now at the point where
> there're no longer any items we can remove w/o making the filters less
> generally useful which signals a stopping point so we can begin widespread
> deployment.
>
> In terms of a future extension, BIP 158 already defines custom filter
> types,
> and BIP 157 allows filters to be fetched in batch based on the block height
> and numerical range. The latter feature can later be modified to return a
> single composite filter rather than several individual filters.
>
> -- Laolu
>
>
> On Mon, Jun 4, 2018 at 7:28 AM Riccardo Casatta via bitcoin-dev <
> bitcoin-dev@lists.linuxfoundation.org> wrote:
>
>> I was wondering why this multi-layer multi-block filter proposal isn't
>> getting any comments;
>> is it because not asking for all filters leaks information?
>>
>> Thanks
>>
>> On Fri, May 18, 2018 at 8:29 AM Karl-Johan Alm via bitcoin-dev <
>> bitcoin-dev@lists.linuxfoundation.org> wrote:
>>
>>> On Fri, May 18, 2018 at 12:25 AM, Matt Corallo via bitcoin-dev
>>>  wrote:
>>> > In general, I'm concerned about the size of the filters making existing
>>> > SPV clients less willing to adopt BIP 158 instead of the existing bloom
>>> > filter garbage and would like to see a further exploration of ways to
>>> > split out filters to make them less bandwidth intensive. Some further
>>> > ideas we should probably play with before finalizing moving forward is
>>> > providing filters for certain script templates, eg being able to only
>>> > get outputs that are segwit version X or other similar ideas.
>>>
>>> There is also the idea of multi-block filters. The idea is that light
>>> clients would download a pair of filters for blocks X..X+255 and
>>> X+256..X+

Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-06-04 Thread Riccardo Casatta via bitcoin-dev
I was wondering why this multi-layer multi-block filter proposal isn't
getting any comments;
is it because not asking for all filters leaks information?

Thanks

On Fri, May 18, 2018 at 8:29 AM Karl-Johan Alm via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org> wrote:

> On Fri, May 18, 2018 at 12:25 AM, Matt Corallo via bitcoin-dev
>  wrote:
> > In general, I'm concerned about the size of the filters making existing
> > SPV clients less willing to adopt BIP 158 instead of the existing bloom
> > filter garbage and would like to see a further exploration of ways to
> > split out filters to make them less bandwidth intensive. Some further
> > ideas we should probably play with before finalizing moving forward is
> > providing filters for certain script templates, eg being able to only
> > get outputs that are segwit version X or other similar ideas.
>
> There is also the idea of multi-block filters. The idea is that light
> clients would download a pair of filters for blocks X..X+255 and
> X+256..X+511, check if they have any matches and then grab pairs for
> any that matched, e.g. X..X+127 & X+128..X+255 if left matched, and
> iterate down until it ran out of hits-in-a-row or it got down to
> single-block level.
>
> This has an added benefit where you can accept a slightly higher false
> positive rate for bigger ranges, because the probability of a specific
> entry having a false positive in each filter is (empirically speaking)
> independent. I.e. an FP probability of 1% in the 256-block range filter
> and an FP probability of 0.1% in the 128-block range filter would mean the
> combined probability is actually 0.001%.
>
> Wrote about this here: https://bc-2.jp/bfd-profile.pdf (but the filter
> type is different in my experiments)
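
A sketch of the drill-down described in the quote above; filter_matches(lo,
hi) is a hypothetical predicate returning True when the range filter for
blocks [lo, hi] matches any wallet item (or false-positives):

    def drill_down(lo: int, hi: int, filter_matches) -> list:
        if not filter_matches(lo, hi):
            return []
        if lo == hi:
            return [lo]        # single-block level: fetch this block
        mid = (lo + hi) // 2
        return (drill_down(lo, mid, filter_matches)
                + drill_down(mid + 1, hi, filter_matches))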


-- 
Riccardo Casatta - @RCasatta 


Re: [bitcoin-dev] BIP 158 Flexibility and Filter Size

2018-05-18 Thread Riccardo Casatta via bitcoin-dev
Another parameter which heavily affects the filter size is the false
positive rate, which is empirically set to 2^-20.
The BIP references some Go code for how the parameter has been selected,
which I can hardly understand and run; it's totally my fault, but if
possible I would really like more details on the process, like charts and
explanations (for example, which number of elements to search for has the
filter been optimized for?).

Instinctively I feel 2^-20 is super low, and choosing a much higher alpha
would shrink the total filter size by gigabytes at the cost of having to
wastefully download just some megabytes of blocks.
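
To make the trade-off concrete, here is a rough sketch (my own
approximation, not the BIP's selection code) of how Golomb-coded set size
scales with the false-positive parameter P, using the usual estimate of
about P + 1.5 bits per element (P remainder bits plus roughly 1.5 expected
bits for the unary quotient):

    def gcs_bits_per_item(p: int) -> float:
        # FP rate is 2**-p; per-element cost grows linearly in p
        return p + 1.5

    for p in (10, 15, 20):
        print(p, gcs_bits_per_item(p))  # 11.5, 16.5, 21.5 bits per element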


2018-05-17 18:36 GMT+02:00 Gregory Maxwell via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org>:

> On Thu, May 17, 2018 at 3:25 PM, Matt Corallo via bitcoin-dev
>  wrote:
> > I believe (1) could be skipped entirely - there is almost no reason why
> > you'd not be able to filter for, eg, the set of output scripts in a
> > transaction you know about
>
> I think this is convincing for the txids themselves.
>
> What about also making input prevouts filter based on the scriptpubkey
> being _spent_?  Layering wise in the processing it's a bit ugly, but
> if you validated the block you have the data needed.
>
> This would eliminate the multiple data type mixing entirely.



-- 
Riccardo Casatta - @RCasatta 


Re: [bitcoin-dev] Optimized Header Sync

2018-04-01 Thread Riccardo Casatta via bitcoin-dev
Yes, I think the checkpoints and the compressed header streams should be
handled in chunks of 2016 headers and queried by chunk number instead of by
height, falling back to the current method if the chunk is not full yet.

This is cache-friendly and allows avoiding bits 0 and 1 in the bitfield
(because they are always 1 after the first header in the chunk of 2016).
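
A minimal sketch of the chunk indexing this implies (2016 headers per chunk,
aligned to difficulty periods; the names are illustrative):

    CHUNK = 2016  # headers per difficulty period

    def chunk_number(height: int) -> int:
        return height // CHUNK

    def chunk_is_full(chunk: int, tip_height: int) -> bool:
        # A chunk can be cached and served whole only once its last height
        # exists; otherwise fall back to the current per-height method.
        return (chunk + 1) * CHUNK - 1 <= tip_height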

2018-03-30 8:14 GMT+02:00 Anthony Towns :

> On Thu, Mar 29, 2018 at 05:50:30PM -0700, Jim Posen via bitcoin-dev wrote:
> > Taken a step further though, I'm really interested in treating the
> checkpoints
> > as commitments to chain work [...]
>
> In that case, shouldn't the checkpoints just be every 2016 blocks and
> include the corresponding bits value for that set of blocks?
>
> That way every node commits to (approximately) how much work their entire
> chain has by sending something like 10kB of data (currently), and you
> could verify the deltas in each node's chain's target by downloading the
> 2016 headers between those checkpoints (~80kB with the proposed compact
> encoding?) and checking the timestamps and proof of work match both the
> old target and the new target from adjacent checkpoints.
>
> (That probably still works fine even if there's a hardfork that allows
> difficulty to adjust more frequently: a bits value at block n*2016 will
> still enforce *some* lower limit on how much work blocks n*2016+{1..2016}
> will have to contribute; so will still allow you to estimate how much work
> will have been done, it may just be less precise than the estimate you
> could
> generate now)
>
> Cheers,
> aj
>
>


-- 
Riccardo Casatta - @RCasatta 


Re: [bitcoin-dev] Optimized Header Sync

2018-03-29 Thread Riccardo Casatta via bitcoin-dev
Hi Jim,

| version[i] = version[i - ((bit[3] << 2) + (bit[4] << 1) + bit[5])]


I thought this wasn't effective in case overt ASIC Boost gets widely
adopted, but then I understood that at the moment only two bits of the
version get scrambled by that technique, so this looks fine; maybe add a
comment about this so the reader doesn't get the same initial doubt I did.

> ...downloading evenly spaced checkpoints throughout history (say every
> 1,000th) from all peers first...


My feeling is that the encoding of the headers and the checkpoints/parallel
download are separate subjects for two BIPs.
About the checkpoints, I don't grasp why they are useful, since an attacker
could lie about them, but maybe I am missing something...

To take advantage of these possible savings, this document defines a
> variable-sized ''compressed encoding'' of block headers that occur in a
> range. Note that no savings are possible when serializing a single header;
> it should only be used for vectors of sequential headers. The full headers
> are reconstructed using data from previous headers in the range. The
> serialization begins with an ''encoding indicator'', which is a bitfield
> specifying how each field is serialized. The bits of the indicator have the
> following semantics:


The bitfield allows great savings; however, the encoding depends on the
header height a client asks for, which causes a little computational burden
on the node and the undesirable side effect of difficult caching.
Variable-length encoding causes caching difficulties too...
A simpler approach could be to encode the headers in groups of 2016 (the
difficulty period), where the first header is complete and the other 2015
are missing the previous hash and the difficulty. This achieves comparable
savings (~45%), allows better caching, and has a fixed-length encoding. This
could be useful for the node, which could cache the headers in a single file
on disk and simply stream out the relevant range when requested, or serve
the same encoded header format in other contexts like HTTP, leveraging HTTP
caching infrastructure.
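
A minimal sketch of that fixed-length encoding, assuming the standard
80-byte header layout (version, prev_hash, merkle_root, time, bits, nonce);
prev_hash and bits are dropped from all but the first header because the
client can recompute both:

    def compress_chunk(headers: list) -> bytes:
        # headers: the 2016 raw 80-byte headers of one difficulty period
        assert all(len(h) == 80 for h in headers)
        out = bytearray(headers[0])    # first header kept complete
        for h in headers[1:]:
            out += h[0:4]              # version
            out += h[36:68]            # merkle root (prev_hash at 4:36 dropped)
            out += h[68:72]            # timestamp
            out += h[76:80]            # nonce (bits at 72:76 dropped)
        return bytes(out)

    # 80 + 2015*44 = 88740 bytes vs 2016*80 = 161280 bytes: ~45% saved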



2018-03-28 1:31 GMT+02:00 Jim Posen via bitcoin-dev <
bitcoin-dev@lists.linuxfoundation.org>:

> Based on some ideas that were thrown around in this thread
> (https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015385.html), I
> have been working on a P2P extension that will allow faster header sync
> mechanisms. The one-sentence summary is that by encoding headers more
> efficiently (eg. omitting prev_hash) and downloading evenly spaced
> checkpoints throughout history (say every 1,000th) from all peers first, we
> could speed up header sync, which would be a huge improvement for light
> clients. Here is a draft of the BIP:
> https://github.com/jimpo/bips/blob/headers-sync/headersv2.mediawiki. The
> full text is below as well.
>
> I'd love to hear any feedback people have.
>
> --
>
> == Abstract ==
>
> This BIP describes a P2P network extension enabling faster, more reliable 
> methods for syncing the block header chain. New P2P messages are proposed as 
> more efficient replacements for getheaders and 
> headers during initial block download. The proposed header 
> download protocol reduces bandwidth usage by ~40%-50% and supports 
> downloading headers ranges from multiple peers in parallel, which is not 
> possible with the current mechanism. This also enables sync strategies with 
> better resistance to denial-of-service attacks.
>
> == Motivation ==
>
> Since 2015, optimized Bitcoin clients fetch all block headers before blocks 
> themselves in order to avoid downloading ones that are not part of the most 
> work chain. The protocol currently in use for fetching headers leaves room 
> for further optimization, specifically by compressing header data and 
> downloading more headers simultaneously
> (https://lists.linuxfoundation.org/pipermail/bitcoin-dev/2017-December/015385.html).
>  Any savings here should have a large impact given that both full nodes and 
> light clients must sync the header chain as a first step, and that the time 
> to validate and index the headers is negligible compared to the time spent 
> downloading them from the network. Furthermore, some current implementations 
> of headers syncing rely on preconfigured checkpoints to discourage attackers 
> attempting to fill up a victim's disk space with low-work headers. The 
> proposed messages enable sync strategies that are resilient against these 
> types of attacks. The P2P messages are designed to be flexible, supporting 
> multiple header sync strategies and leaving room for future innovations, 
> while also compact.
>
> == Definitions ==
>
> ''double-SHA256'' is a hash algorithm defined by two invocations of SHA-256: 
> double-SHA256(x) = SHA256(SHA256(x)).
>
> == Specification ==
>
> The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", 
> "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this 

[bitcoin-dev] "Compressed" headers stream

2017-08-28 Thread Riccardo Casatta via bitcoin-dev
Hi everyone,

the Bitcoin headers are probably the most condensed and important piece of
data in the world; demand for them is expected to grow.

When sending a stream of contiguous block headers, a common case in IBD and
for disconnected clients, I think there is a possible optimization of the
transmitted data:
the headers after the first could avoid transmitting the previous hash,
because the receiver can compute it by double-hashing the previous header
(an operation he needs to do anyway to verify the PoW).
In a long stream, for example 2016 headers, the savings in bandwidth are
about 32/80 ~= 40%:
without compressed headers: 2016*80 = 161280 bytes
with compressed headers: 80 + 2015*48 = 96800 bytes

What do you think?
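
A minimal sketch of the receiving side (assuming the standard 80-byte header
layout, with the 48-byte compressed form being everything except the 32-byte
previous hash):

    import hashlib

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def decompress(stream: bytes) -> list:
        headers = [stream[:80]]               # first header sent complete
        pos = 80
        while pos < len(stream):
            partial = stream[pos:pos + 48]    # version + merkle + time + bits + nonce
            prev_hash = dsha256(headers[-1])  # recomputed, needed for PoW anyway
            headers.append(partial[:4] + prev_hash + partial[4:])
            pos += 48
        return headers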


In OpenTimestamps calendars we are going to use this compression to give
lite clients reasonably secure proofs (a full node gives higher security,
but isn't feasible in all situations, for example for in-browser
verification).
To speed up the sync of a new client, Electrum starts with the download of a
~36MB file containing the first 477637 headers.
For this kind of client, a common HTTP API with fixed-position chunks could
be useful to leverage HTTP caching. For example, /headers/2016/0 returns the
headers from the genesis block to the 2015th header included, while
/headers/2016/1 gives the headers from the 2016th to the 4031st.
Other endpoints could have chunks of 20160 or 201600 blocks, such that with
about 10 HTTP requests a client could fast-sync the headers.


-- 
Riccardo Casatta - @RCasatta 