Edward Shishkin wrote:
> Clemens Eisserer wrote:
>>> But speaking of single threadedness, more and more desktops are
>>> shipping
>>> with ridiculously more power than people need.  Even a gamer really
>>
>> Will the LZO compression code in reiser4 be able to use
>> multi-processor systems?
>> E.g. if I have a Turion X2 in my laptop, will it use 2 threads for
>> compression/decompression, making CPU throughput much better than
>> what the disk could do?
>>
>
> Compression happens at flush time, and there can be more than
> one flush thread processing the same transaction atom.
> Decompression happens in the context of readpage/readpages.
> So if you mean per file, then yes for compression and no for
> decompression.
I don't think your explanation above is a good one.

If there is more than one process reading a file, then you can have
multiple decompressions of the same file at one time, yes?

Just because there can be more than one flush thread per file does not
mean it is likely there will be.

CPU scheduling of compression/decompression is an area that could use
work in the future. For now, just understand that what we do is
better than doing nothing. ;-/
>
> Edward.
>
>
>> lg Clemens
>>
>>
>> 2006/8/30, Hans Reiser <[EMAIL PROTECTED]>:
>>
>>> Edward Shishkin wrote:
>>> >
>>> > A (plain) file is treated as a set of logical clusters (64K by
>>> > default). The minimal unit a (plain) file occupies in memory is
>>> > one page. A compressed logical cluster is stored on disk in a
>>> > so-called "disk cluster". A disk cluster is a set of special items
>>> > (aka "ctails", or "compressed bodies"), so that one block can
>>> > contain (compressed) data of many files and everything is packed
>>> > tightly on disk.
>>> >
>>> >
>>> >
>>> So the compression unit is 64k for purposes of your benchmarks.
