Hello, Zhou

>On 12/02/2015 03:24 PM, Dave Young wrote:
>> Hi,
>>
>> On 12/02/15 at 01:29pm, "Zhou, Wenjian/周文剑" wrote:
>>> I think there is no problem if other test results are as expected.
>>>
>>> --num-threads mainly reduces the time spent on compression,
>>> so for lzo it can't help much most of the time.
>>
>> It seems the help text of --num-threads does not say that exactly:
>>
>>    [--num-threads THREADNUM]:
>>        Use multiple threads to read and compress the data of each page
>>        in parallel. This reduces the time for saving the DUMPFILE.
>>        This feature only supports creating a DUMPFILE in kdump-compressed
>>        format from a VMCORE in kdump-compressed format or ELF format.
>>
>> Lzo is also a compression method; it should be mentioned that
>> --num-threads only supports zlib-compressed vmcores.
>>
>
>Sorry, it seems that what I said was not so clear.
>lzo is also supported. Since lzo compresses data at high speed, the
>performance improvement is not so obvious most of the time.
>
>> It is also worth mentioning the recommended -d value for this feature.
>>
>
>Yes, I think it's worth mentioning. I forgot about it.
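
(Just to make the discussion concrete for readers following along, the
feature is invoked like, for example:

    makedumpfile -c -d 31 --num-threads 8 /proc/vmcore dumpfile

where -c selects zlib compression and -l would select lzo.)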

I saw your patch, but I think I should confirm what the problem is first.

>However, when "-d 31" is specified, it will be worse.
>Less than 50 buffers are used to cache the compressed page.
>And even the page has been filtered, it will also take a buffer.
>So if "-d 31" is specified, the filtered page will use a lot
>of buffers. Then the page which needs to be compressed can't
>be compressed parallel.

Could you explain in more detail why compression will not be parallel?
Using the buffers for filtered pages as well does sound inefficient,
but I don't understand why it prevents parallel compression.
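
To check my understanding, here is a rough model (illustrative C only,
not the actual makedumpfile code; NUM_BUFFERS stands in for the real
pool size): if every page, filtered or not, occupies one of the ~50
buffers, and the buffers are recycled in page order, then the number
of pages that can be compressed in parallel is bounded by the number
of unfiltered pages among 50 consecutive pages:

#include <stdio.h>

#define NUM_BUFFERS 50  /* stand-in for the fixed pool of page buffers */

int main(void)
{
        /* hypothetical ratios of filtered pages; "-d 31" filters most */
        double filtered[] = { 0.0, 0.5, 0.9, 0.99 };
        int i;

        for (i = 0; i < 4; i++) {
                /* buffers left for pages that really need compression */
                double compressible = NUM_BUFFERS * (1.0 - filtered[i]);
                printf("filtered %2.0f%% -> ~%4.1f of %d buffers hold real work\n",
                       filtered[i] * 100.0, compressible, NUM_BUFFERS);
        }
        return 0;
}

If that model is right, a high filter ratio caps the usable parallelism,
but it should not serialize the compression completely, so I'd like to
know which part actually blocks.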

Further, according to Chao's benchmark, there is a big performance
degradation even when the number of threads is 1 (58s vs. 240s).
The current implementation seems to have some problems; we should
solve them.


Thanks,
Atsushi Kumagai