On 08/19/2013 10:20 AM, Johannes Sixt wrote:
> Am 19.08.2013 08:38, schrieb Steffen Prohaska:
>> +test_expect_success EXPENSIVE 'filter large file' '
>> +    git config filter.largefile.smudge cat &&
>> +    git config filter.largefile.clean cat &&
>> +    for i in $(test_seq 1 2048); do printf "%1048576d" 1; done >2GB &&
> Shouldn't you count to 2049 to get a file that is over 2GB?

Would it be possible to offload the looping from the shell to a real
program? For example
        truncate -s 2049M <filename>
should do the job. That would create a file that reads as all zero bytes
and is larger than 2 GB. If truncate is not available, what about dd?
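
Something along these lines, perhaps (just a sketch, assuming GNU truncate
and the common dd seek-past-the-end idiom; the filename "2GB" follows the
quoted test):

```shell
# Create a sparse file just over 2 GB without looping in the shell.
# Prefer truncate(1); fall back to dd, which with count=0 writes nothing
# but extends (truncates) the output file to the seek offset.
if command -v truncate >/dev/null 2>&1; then
	truncate -s 2049M 2GB
else
	dd if=/dev/zero of=2GB bs=1M seek=2049 count=0 2>/dev/null
fi
```

Either way the file is sparse, so it should not actually cost 2 GB of
disk space on filesystems that support holes.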

