Hello Dan,

Tuesday, April 17, 2007, 10:59:53 PM, you wrote:

DM> Robert Milkowski wrote:
>> Hello Dan,
>> 
>> Tuesday, April 17, 2007, 9:44:45 PM, you wrote:
>> 
>>>>> How can this work?  With compressed data, it's hard to predict its
>>>>> final size before compression.
>>>> Because you are NOT compressing the file, only compressing the blocks as
>>>> they get written to disk.
>> 
>> DM> I guess this implies that the compression only can save integral
>> DM> numbers of blocks.
>> 
>> Can you clarify, please?
>> I don't understand the above...

DM> If compression is done block-wise, then if I compress a 512-byte block to 2
DM> bytes, I still need a 512-byte block to store it.

DM> Similarly, if I compress 1000 blocks to 999.001 blocks, I still need 1000
DM> blocks to store them.

DM> This is not a significant problem, I'm sure, but it's worth remembering.
DM> Many tiny files probably don't benefit from compression at all, rather than
DM> "only a little".

Yep, that's true. Since the smallest block in ZFS is 512 bytes...

But there's one exception - if you're creating small files (and also
large ones) filled with 0s, then you will gain storage even if each file
is less than 512B, as no data block is allocated in that case :)
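
To make the block accounting concrete, here's a rough sketch in Python -
just a model of the idea discussed above, not actual ZFS code. It assumes
a 512-byte sector, rounds each compressed non-zero block up to whole
sectors, and charges nothing for an all-zero block:

    import math
    import zlib

    SECTOR = 512  # smallest ZFS block size, as discussed above

    def allocated_bytes(block):
        """Rough model of block-wise compression accounting (illustration only)."""
        if block == bytes(len(block)):
            return 0                      # all-zero block: no data block allocated
        compressed = zlib.compress(block)
        if len(compressed) >= len(block):
            return len(block)             # compression didn't help; store uncompressed
        # a compressed block still occupies whole 512-byte sectors on disk
        return math.ceil(len(compressed) / SECTOR) * SECTOR

    print(allocated_bytes(b"a" * 512))     # -> 512, even though it compresses to a few bytes
    print(allocated_bytes(b"\x00" * 512))  # -> 0, zero-filled blocks take no space

The real on-disk logic is of course more involved, but it shows why many
tiny files see no benefit from compression at all, while zero-filled files
cost essentially nothing.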


-- 
Best regards,
 Robert                            mailto:[EMAIL PROTECTED]
                                       http://milek.blogspot.com

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
