> From: Richard Daemon [mailto:[EMAIL PROTECTED]
>
> On Jan 19, 2008 8:31 PM, Schöberle Daniel
> <[EMAIL PROTECTED]> wrote:
>
>
>       Hi all!
>
>       I've just upgraded my firewall from OpenBSD 4.0 to 4.2-stable
>       and ran into a small problem regarding mount_mfs. I solved it,
>       but in case anybody else runs into it, here's something for
>       the archives.
>
>       I run the box from a 512MB CF and, originally, with very limited
>       memory. /var, /tmp and /dev are mount_mfs filesystems, and
>       during the upgrade I had trouble with mounting /dev.
>
>       I used to mount /dev with the following line:
>
>       swap /dev mfs rw,-P=/proto/dev,-s=700,-i=256 0 0
>
>       It seems that sometime after 4.1 was released (probably during
>       ffs2 development) mount_mfs was changed in such a way that it
>       no longer allows a very high inode density. This resulted in
>       mount_mfs failing to replicate /dev and me getting a read-only
>       /dev, which resulted in a box that I couldn't log into remotely
>       (with ssh). Luckily I could still issue commands with WinSCP or
>       log in locally. After a couple of tests I concluded that
>       mount_mfs simply ignores density settings lower than 1024, so I
>       changed the /dev entry to the following line:
>
>       swap /dev mfs rw,-P=/proto/dev,-s=4000,-i=1024 0 0
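[The arithmetic behind this change can be sketched roughly. This is my own approximation, not taken from the mount_mfs source: -s is the filesystem size in 512-byte sectors, -i is the density in bytes of data per inode, so the inode count is about size/density; the real figure from newfs is somewhat lower because of metadata overhead.]

```python
# Rough estimate of how many inodes an mfs gets from -s and -i.
# -s is the filesystem size in 512-byte sectors, -i the density in
# bytes per inode; inodes ~= size_in_bytes / density. The exact
# number from newfs is lower due to superblock/metadata overhead.

SECTOR = 512

def approx_inodes(sectors, density):
    return sectors * SECTOR // density

print(approx_inodes(700, 256))    # old line -s=700,-i=256: ~1400 inodes
print(approx_inodes(700, 1024))   # density clamped to 1024: only ~350
print(approx_inodes(4000, 1024))  # new line -s=4000,-i=1024: ~2000
```

[With the density silently raised to 1024, the old 700-sector filesystem would hold only ~350 inodes, too few to replicate /dev, which is why the size had to grow as well.]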
>
>       Now everything is ok, I'm happy, and since the CF is in a new
>       box with lots of memory I'm not trying to squeeze every byte
>       out of it.
>
>       Maybe this maximal density could be documented somehow? I
>       glanced at mkfs.c and saw that, in theory, it should warn the
>       user when reducing the density, but I never got a warning
>       during my tests.
>
>       dmesg in case anybody needs it:

<snip dmesg>

>
> Wow, very weird that you post this. I just noticed the exact same
> thing yesterday too. Upgraded from 4.0-stable to 4.2-stable on a
> WRAP (pcengines.ch) box with my 512M CF and had /dev entries
> failing as well. My previous inode settings used to be:
>
> swap /dev mfs rw,-P=/.devtmp,-s=1200,-i=128 0 0
>
> but that crapped out in 4.2.
>
> I changed it to -s=3072,-i=128 just to get it fully working
> properly, and I haven't looked into it further yet, but I'm
> wondering if I'm better off trying a higher inode density (like
> yours) but a lower MFS size such as -s=1024, because I'm limited
> in memory (128M total). Other than that, is an MFS /dev size
> bigger than 1M even needed? I'd really like to reduce it as much
> as possible.
>
> Thanks for the post!
>
> I'm new to this mailing list and so far, it's great!

No, I don't think you'd ever need a /dev this big, but in order to
get the needed number of inodes you have to push the size up.
Your line is ok, but maybe you should put -i=1024 instead of -i=128,
so you know what the real values are - that's what it's using anyway.
With 128MB you really shouldn't worry. I was concerned because I had
only 32MB or 48MB. mount_mfs doesn't really use the memory until
it's needed, so you could make, say, a 100GB mfs on a box with 128MB
of RAM and it would work as long as you've got memory to hold the
files. Regarding /dev, you really don't need much as it's a small
filesystem, but sometimes you can get real files in there. This is
what happened once to my lil' box (I had a _real_ /dev/null) and it
crapped out because it ran out of memory. After that I reduced /dev
as much as I could; I didn't want another local DoS to happen. I
have 512MB now and couldn't care less if /dev is 0.1 or 1 MB, and
with 128MB you shouldn't either, especially since memory gets
allocated only if really needed by the files.
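[Under the same rough model as above - inodes ~= size/density, ignoring newfs metadata overhead; my approximation, not from the mount_mfs source - the smallest -s for a given inode budget at the apparently enforced minimum density of 1024 is easy to estimate:]

```python
# Smallest -s (512-byte sectors) that still yields a target number of
# inodes when mount_mfs enforces a minimum density of 1024 bytes per
# inode. Assumes inodes ~= size_in_bytes / density and ignores newfs
# metadata overhead, so pad the result in practice.

SECTOR = 512
MIN_DENSITY = 1024  # density mount_mfs seems to enforce after 4.1

def min_sectors(target_inodes, density=MIN_DENSITY):
    return target_inodes * density // SECTOR

# A /dev needing ~1400 inodes (the -s=700,-i=256 case) needs at least
# ~2800 sectors (~1.4MB) at density 1024; -s=1024 (0.5MB, ~512 inodes)
# would likely be too small, while -s=3072 (~1.5MB, ~1536 inodes)
# leaves headroom.
print(min_sectors(1400))
```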
