> On 29 Aug 2016, at 19:52, Junio C Hamano <gits...@pobox.com> wrote:
> Lars Schneider <larsxschnei...@gmail.com> writes:
>>> On 25 Aug 2016, at 21:17, Stefan Beller <sbel...@google.com> wrote:
>>>> On Thu, Aug 25, 2016 at 4:07 AM,  <larsxschnei...@gmail.com> wrote:
>>>> From: Lars Schneider <larsxschnei...@gmail.com>
>>>> Generate more interesting large test files
>>> How are the large test files more interesting?
>>> (interesting in the sense of covering more potential bugs?
>>> easier to debug? better to maintain, or just a pleasant read?)
>> The old large test file was 1MB of zeros followed by a single one
>> byte, repeated 2048 times.
>> Since the filter protocol uses 64k packets, we would end up testing a
>> large number of identical-looking packets.
>> That's why I thought pseudo-random content would be more interesting.
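
For context, the old pattern can be reproduced with something along
these lines (a sketch based on the description above, not the actual
test code; the filename is made up):

    i=0
    while test $i -lt 2048
    do
        dd if=/dev/zero bs=1048576 count=1 2>/dev/null
        printf '\001'
        i=$((i+1))
    done >large.file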
> I guess my real question is why it is not just a single invocation
> of test-genrandom that gives you the whole test file; if you are
> using 20MB, the simplest would be to grab 20MB out of test-genrandom.
> With that you hopefully won't see a large number of identical-looking
> packets, no?
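
For reference, such a single invocation would look something like
this (seed string and filename are made up):

    test-genrandom "some-seed" $((20*1024*1024)) >large.file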

True, but applying rot13 (via tr ...) to 20+ MB takes quite a bit of
time. That's why I came up with the 1MB of SP characters in between.
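
The rot13 in question is essentially a tr invocation of this kind
(filenames are made up):

    tr 'a-zA-Z' 'n-za-mN-ZA-M' <random.file >rot13.file

Non-letter bytes such as SP pass through unchanged.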

However, I realized that testing a large amount of data is not really
necessary for the final series. A single packet is 64k, so a 500k
pseudo-random test file should be sufficient. This will make the test
way simpler.
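
Concretely, the generation would then boil down to a single call
(seed and filename made up once more):

    test-genrandom "some-seed" $((500*1024)) >random.file

That is 512000 bytes, i.e. seven full 64k packets plus a partial
final one, so the test should still exercise both full and partial
packets.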

