On Tue, 2008-01-15 at 14:56 +0200, Lars Noodén wrote:
> I've got OpenBSD 4.2 on a net4801 and am mounting /dev in RAM. I'd like
> to reduce the amount of memory used even further. Currently only 4% is
> used:
>
> Filesystem     Size    Used   Avail Capacity  Mounted on
> mfs:21800      853K   34.0K    777K     4%    /dev
>
> What should I be aware of in /etc/fstab to minimize the memory footprint
> of /dev? Currently, I have:
>
> swap /dev mfs rw,-P/dev.base,-i1,-s2300,-c264,-m5,nosuid 0 0
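
A quick aside before the numbers: df's -i flag shows how many inodes /dev
actually uses, which is worth checking before shrinking anything further.
The exact figures will differ from box to box, so treat this only as the
shape of the check:

# df -i /dev
# ls /dev | wc -l

The second command just counts the device nodes actually being carried
around.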

Couldn't resist .. here's how far I got after about 45 mins. of testing:

# mount_mfs -s 512 -b 32768 -f 32768 -P/var/dev -o nosuid swap /mnt/test/
# df /mnt/test
Filesystem  512-blocks     Used    Avail Capacity  Mounted on
mfs:17962          192      128       64    67%    /mnt/test/
# df -h /mnt/test
Filesystem     Size    Used   Avail Capacity  Mounted on
mfs:17962     96.0K   64.0K    32.0K    67%    /mnt/test/

Note: the '-i' option is ignored here - it doesn't make any noticeable
difference at all. Block and fragment size seem to be the issues here;
with these set to the defaults, I never got past 13%. You would only need
to change the default bytes-per-inode (-i) if you run out of inodes.
Maintaining fragments appears to be pretty disk-intensive; I suspect that
if the block size equals the fragment size, fragment handling is turned
off altogether.

I couldn't reproduce Jean-Yves' 23.0K
(mount_mfs -i 256 -s 592 -P /var/run/dev swap /dev):

# mount_mfs -i 256 -s 512 -P/var/dev -o nosuid swap /mnt/test
# df -h /mnt/test
mfs:29060      143K   15.0K    121K    11%    /mnt/test

I suspect that the '-s' value 592 was a typo, since it was rejected over
here:

mount_mfs: reduced number of fragments per cylinder group from 72 to 64
    to enlarge last cylinder group
mount_mfs: inode table does not fit in cylinder group

I can't find any hard data - apart from claims and statements - about
dynamic growth (or shrinkage!) of the MFS partition, but should any
process hijack /dev, e.g. as its temp dir, then 96K (or 23K, if possible)
should be a significant improvement over the 853K in Lars' original
posting, even if it's only a maximum value.
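
For what it's worth, folded back into the /etc/fstab form Lars asked
about, the settings above would look roughly like this - untested,
keeping his -P/dev.base populate directory and leaving out the -i/-c/-m
overrides from his original line (at least -i made no difference here):

swap /dev mfs rw,-P/dev.base,-s512,-b32768,-f32768,nosuid 0 0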

Bill
--
"What's a computer?" - MES

_______________________________________________
Soekris-tech mailing list
[email protected]
http://lists.soekris.com/mailman/listinfo/soekris-tech