Here are the latest results on compression ratios for the first 295,000
blocks, compressionlevel=6. I think there are more than enough
data points for statistical significance.
Results are very much similar to the previous test. I'll work on
getting a comparison between how much time
If that were true then we wouldn't need to gzip large files before
sending them over the internet. Data compression generally helps
transmission speed as long as the amount of compression is high enough
and the time it takes is low enough to make it worthwhile. On a
corporate LAN it's generally
Data compression adds latency and reduces predictability, so engineers have
decided to leave compression to the application layer rather than the transport
layer or lower, letting the application designer decide what tradeoffs to make.
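The tradeoff described above can be sketched numerically. This is a toy model, not a benchmark: the link speeds and the ~30 MB/s compressor throughput below are illustrative assumptions.

```python
def transfer_time(size_bytes, link_bps, ratio=1.0, comp_bps=None):
    """Total time to (optionally) compress and send a payload.

    ratio    -- compressed size / original size (1.0 = no compression)
    comp_bps -- compressor throughput in bytes/sec (None = skip compression)
    """
    t = size_bytes * ratio / link_bps      # wire time for the payload
    if comp_bps is not None:
        t += size_bytes / comp_bps         # time spent compressing
    return t

size      = 1_000_000      # 1 MB payload
slow_link = 125_000        # ~1 Mbit/s in bytes/sec
fast_lan  = 125_000_000    # ~1 Gbit/s in bytes/sec
zip_speed = 30_000_000     # assumed ~30 MB/s compressor throughput

# On a slow link, shaving 25% off the payload easily pays for itself:
assert transfer_time(size, slow_link, 0.75, zip_speed) < transfer_time(size, slow_link)
# On a fast LAN, compression time dominates and sending raw wins:
assert transfer_time(size, fast_lan, 0.75, zip_speed) > transfer_time(size, fast_lan)
```

The crossover point moves with both the achievable ratio and the compressor's speed, which is exactly why the decision is best left to the application.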
On Nov 11, 2015, at 10:49 AM, Marco Pontello via bitcoin-dev wrote:
On 10/11/2015 8:11 AM, Peter Tschipper wrote:
> On 10/11/2015 1:44 AM, Tier Nolan via bitcoin-dev wrote:
>> The network protocol is not quite consensus critical, but it is
>> important.
>>
>> Two implementations of the decompressor might not be bug for bug
>> compatible. This (potentially) means
On 10/11/2015 8:46 AM, Jeff Garzik via bitcoin-dev wrote:
> Comments:
>
> 1) cblock seems a reasonable way to extend the protocol. Further
> wrapping should probably be done at the stream level.
Agreed.
>
> 2) zlib has crappy security track record.
>
Zlib had a bad buffer overflow bug but that
I think 25% bandwidth savings is certainly considerable, especially for
people running full nodes in countries like Australia where internet
bandwidth is lower and there are data caps.
I absolutely would not dismiss 25% compression. gzip and bzip2 compression
are relatively standard, and I'd
I would expect that since a block contains mostly hashes and crypto signatures,
it would be almost totally incompressible. I just calculated compression
ratios:
zlib:  -15% (file is LARGER)
gzip:   28%
bzip2:  25%
So zlib compression is right out. How much is ~25% bandwidth savings
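A quick way to sanity-check figures like these is to compute the savings directly with the standard compressors. A minimal sketch: random bytes stand in for the hash/signature-heavy portions of a block (an assumption, not real block data), while a highly redundant text buffer shows the other extreme.

```python
import bz2
import gzip
import os
import zlib

def savings(data, compress):
    """Percent size reduction; negative means the output grew."""
    return 100.0 * (1 - len(compress(data)) / len(data))

random_data = os.urandom(256 * 1024)    # stands in for hashes/signatures
text_data   = b"transaction " * 20_000  # highly redundant, compresses well

for name, fn in [("zlib",  lambda d: zlib.compress(d, 6)),
                 ("gzip",  lambda d: gzip.compress(d, 6)),
                 ("bzip2", lambda d: bz2.compress(d, 6))]:
    print(f"{name:5s}  random: {savings(random_data, fn):7.2f}%"
          f"  text: {savings(text_data, fn):7.2f}%")
# Pure random bytes do not compress (savings go slightly negative, since
# the container format adds overhead); real blocks land somewhere in
# between because headers and scripts contain repeated structure.
```

Getting a positive ratio on real block data therefore depends on the redundant parts (scripts, repeated output patterns) outweighing the incompressible hashes.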