Lennart Poettering <lenn...@poettering.net> writes:
> Hmm, this doesn't look right. Here we choose the hash table sizes to
> use for a file, and I doubt we should base this on the currently
> available disk space, since sizing the hashtable will have an effect
> on the entire lifetime of the file, during which the available disk
> space might change wildly.
>
> I think it would be best not to do relative sizes for the journal file
> max size at all, and continue to only support an absolute value for
> that.
>
>> +
>> +uint64_t size_parameter_evaluate(const SizeParameter *sp, uint64_t available) {
>> +        if (sp->value == (uint64_t) -1)
>> +                return (uint64_t) -1;
>> +
>> +        if (sp->relative)
>> +                return sp->value * 0.01 * available;
>
> Hmm, so this implements this as a percentage after all. As mentioned in
> my earlier mail, I think this should be normalized to 2^32 instead, so
> that 2^32 maps to 100%...
I realized that I got the patch wrong. What I really wanted was to take
percentage values of *disk size*, not available space. Using disk size
would make the result constant. Having said that, is it ok to change
even the options that you said were a bad idea?

Also, does it really make sense to implement the relative values as a
mapping, as you have suggested? To me it really doesn't, since taking
more than 100% of disk space is not possible (I don't really count thin
LVs), and mapping to a huge interval is just not as readable as using a
percentage. What is the advantage of the mapping again? Sorry if I'm
being thick.

Cheers,
-- 
Jan Synacek
Software Engineer, Red Hat
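[For reference, the two encodings under discussion can be sketched as
below. This is only an illustration of the arithmetic, not code from the
patch; the function names and the choice of a 128-bit intermediate are
assumptions made for this sketch.]

```c
#include <stdint.h>

/* Percentage encoding, as in the patch quoted above: value in
 * [0, 100] scales "total" linearly, so 100 means the whole disk. */
uint64_t scale_percent(uint64_t value, uint64_t total) {
        return (uint64_t) (value * 0.01 * total);
}

/* Normalized encoding, as Lennart suggests: value in [0, 2^32]
 * scales "total" linearly, so 2^32 maps to 100%. The 128-bit
 * intermediate (a GCC/Clang extension) keeps the multiplication
 * from overflowing for large disks. */
uint64_t scale_normalized(uint64_t value, uint64_t total) {
        return (uint64_t) (((unsigned __int128) value * total) >> 32);
}
```

With these, 10% of the disk is scale_percent(10, total) or
scale_normalized(((uint64_t) 1 << 32) / 10, total). One difference worth
noting: the normalized form allows steps of 1/2^32 of the disk rather
than 1/100, which is presumably part of the motivation for it.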
_______________________________________________
systemd-devel mailing list
systemd-devel@lists.freedesktop.org
http://lists.freedesktop.org/mailman/listinfo/systemd-devel