Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-15 Thread Crystal Kolipe
On Wed, Feb 15, 2023 at 03:10:08PM +0100, Why 42? The lists account. wrote:
> However, I also tried testing the same two filesystems using the
> "Flexible IO Tester" or fio (it's available as a package). When I used it
> to do random 4K reads and writes, I appear to have the opposite result:

...

> I wonder why that would be?

For a start, I would test using something other than /dev/zero as the data
source.

It's entirely possible that the firmware on an SSD would special-case writing
a block that contains only 0x00 bytes.

In that case, and assuming that the filesystem block boundaries align with
the SSD's own internal flash block layout, the SSD would only need to update
its metadata to point those LBA blocks to an internal 'zero' block.

This would virtually eliminate the overhead of actually writing to the flash,
and allow it to accept data from the host at a much faster speed.

As soon as you write a single non-0x00 byte, the drive would have to do a
proper write to the main flash memory, and not just to the area which contains
its internal LBA-to-flash-block mapping (which may also be write-cached).

Depending on the state of the SSD (recently secure-erased, used with another
OS which supports TRIM, alignment of the filesystem blocks with the raw
flash blocks, etc.), this could mean either a write or a
read-erase-write cycle.

Using /dev/zero as the source definitely makes it a synthetic benchmark.
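
A rough sketch of one way to do that (the paths and file names here are just
placeholders, and the sizes are copied from your dd runs): generate a random
test file once, then use it as the write source, so that generating random
data doesn't end up inside the timed run:

  # one-off: create ~1 GB of random data to use as a source
  dd if=/dev/urandom of=/var/tmp/randdata bs=1m count=990

  # timed writes, same loop as before, but reading from the random file
  for i in `jot 5`; do
      dd if=/var/tmp/randdata of=/fast/dd_test/rand bs=1m count=990 2>&1 | grep bytes
  done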



Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-15 Thread Why 42? The lists account.


On Mon, Feb 13, 2023 at 01:50:13PM -, Stuart Henderson wrote:
> ...
> It may be worth checking whether mfs is actually helping -
> it's easy to assume that because it's in RAM it must be fast,
> but I've had machines where mfs was slower than SSD
> (https://marc.info/?l=openbsd-misc&m=164942119618029&w=2),
> also it's taking memory that could otherwise be used by
> buffer cache.

Hi All,

Since you mentioned it, I thought I would retry your dd test ...

# mount | grep /tmp
mfs:15266 on /tmp type mfs (asynchronous, local, nodev, nosuid, size=16777216 
512-blocks)

% cd !$ ; for i in `jot 5`; do dd if=/dev/zero of=mfs bs=1m count=990 2>&1 | 
grep bytes; done
cd /tmp/dd_test ; for i in `jot 5`; do dd if=/dev/zero of=mfs bs=1m count=990 
2>&1 | grep bytes; done
1038090240 bytes transferred in 1.376 secs (754215208 bytes/sec)
1038090240 bytes transferred in 1.189 secs (872536649 bytes/sec)
1038090240 bytes transferred in 1.227 secs (845718432 bytes/sec)
1038090240 bytes transferred in 1.186 secs (874866632 bytes/sec)
1038090240 bytes transferred in 1.254 secs (827186370 bytes/sec)

# mount | grep /fast
/dev/sd1l on /fast type ffs (local, nodev, nosuid, softdep)
# dmesg | grep sd1
sd1 at scsibus2 targ 1 lun 0: 
...

% cd /fast/dd_test ; for i in `jot 5`; do dd if=/dev/zero of=fast bs=1m 
count=990 2>&1 | grep bytes; done 
1038090240 bytes transferred in 0.871 secs (1191076597 bytes/sec)
1038090240 bytes transferred in 0.635 secs (1633246669 bytes/sec)
1038090240 bytes transferred in 0.615 secs (1685529408 bytes/sec)
1038090240 bytes transferred in 0.605 secs (1714639562 bytes/sec)
1038090240 bytes transferred in 0.612 secs (1694489764 bytes/sec)


So it seems that the Samsung NVMe device is much faster ...

However, I also tried testing the same two filesystems using the
"Flexible IO Tester" or fio (it's available as a package). When I used it
to do random 4K reads and writes, I appear to have the opposite result:

fio --name=rand_mmap_r+w --directory=/tmp/fio_test --rw=randrw --blocksize=4k 
--size=6g --io_size=60g --runtime=600 --ioengine=psync --fsync=1 --thread 
--numjobs=1 --group_reporting
...
Run status group 0 (all jobs):
   READ: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=30.0GiB 
(32.2GB), run=236394-236394msec
  WRITE: bw=130MiB/s (136MB/s), 130MiB/s-130MiB/s (136MB/s-136MB/s), io=30.0GiB 
(32.2GB), run=236394-236394msec

% fio --name=rand_mmap_r+w --directory=/fast/fio_test --rw=randrw 
--blocksize=4k --size=6g --io_size=60g --runtime=600 --ioengine=psync --fsync=1 
--thread --numjobs=1 --group_reporting
...
Run status group 0 (all jobs):
   READ: bw=34.8MiB/s (36.5MB/s), 34.8MiB/s-34.8MiB/s (36.5MB/s-36.5MB/s), 
io=20.4GiB (21.9GB), run=60-60msec
  WRITE: bw=34.8MiB/s (36.4MB/s), 34.8MiB/s-34.8MiB/s (36.4MB/s-36.4MB/s), 
io=20.4GiB (21.9GB), run=60-60msec

I wonder why that would be?

Disclaimer: I know almost nothing about fio; I've never used it before.
In particular, it isn't clear to me what the correct/best choice is for
the "ioengine" option. (I played around with a few different settings,
which is why you can see "mmap" in the test name argument.)
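
To illustrate the kind of variation I mean, a hypothetical run with the mmap
engine instead of psync, everything else unchanged, would look roughly like
this (I'm not suggesting mmap is the "right" engine either):

  fio --name=rand_mmap_r+w --directory=/fast/fio_test --rw=randrw \
      --blocksize=4k --size=6g --io_size=60g --runtime=600 \
      --ioengine=mmap --fsync=1 --thread --numjobs=1 --group_reporting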

This is on an 8th-generation i5 Intel NUC running a recent snapshot: 7.2
GENERIC.MP#1049

The CPU has 4 cores, hyperthreading is off. The underlying device for
"/fast" is a Samsung M.2 NVMe "stick":
nvme0: Samsung SSD 970 EVO Plus 500GB, firmware 1B2QEXM7 ...

The full output from fio is included below for anyone who might be
interested ...

Cheers,
Robb.


fio --name=rand_mmap_r+w --directory=/tmp/fio_test --rw=randrw --blocksize=4k 
--size=6g --io_size=60g --runtime=600 --ioengine=psync --fsync=1 --thread 
--numjobs=1 --group_reporting
rand_mmap_r+w: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=psync, iodepth=1
fio-3.33
Starting 1 thread
rand_mmap_r+w: Laying out IO file (1 file / 6144MiB)
Jobs: 1 (f=1): [m(1)][100.0%][r=134MiB/s,w=134MiB/s][r=34.3k,w=34.2k IOPS][eta 
00m:00s]
rand_mmap_r+w: (groupid=0, jobs=1): err= 0: pid=669956672: Wed Feb 15 13:52:03 
2023
  read: IOPS=33.3k, BW=130MiB/s (136MB/s)(30.0GiB/236394msec)
clat (nsec): min=1523, max=1504.6k, avg=5387.11, stdev=1201.82
 lat (nsec): min=1580, max=1504.7k, avg=5450.15, stdev=1203.46
clat percentiles (nsec):
 |  1.00th=[ 3632],  5.00th=[ 4576], 10.00th=[ 4832], 20.00th=[ 5024],
 | 30.00th=[ 5152], 40.00th=[ 5280], 50.00th=[ 5344], 60.00th=[ 5472],
 | 70.00th=[ 5600], 80.00th=[ 5792], 90.00th=[ 5984], 95.00th=[ 6176],
 | 99.00th=[ 6496], 99.50th=[ 6688], 99.90th=[13376], 99.95th=[18048],
 | 99.99th=[26240]
   bw (  KiB/s): min=126573, max=144312, per=100.00%, avg=133298.71, 
stdev=2476.36, samples=472
   iops: min=31643, max=36078, avg=33324.48, stdev=619.06, samples=472
  write: IOPS=33.2k, BW=130MiB/s (136MB/s)(30.0GiB/236394msec); 0 zone resets
clat (usec): min=3, max=1549, avg=13.84, stdev= 2.06
 lat (usec): min=3, max=1549, avg=13.92, stdev= 2.07
clat 

Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-13 Thread Stuart Henderson
On 2023-02-12, Why 42? The lists account.  wrote:
>
> You're exactly right. With this entry in fstab:
>> swap /tmp mfs rw,nodev,nosuid,-s=4194304 0 0 
>
> I now have this /tmp space:
>> mjoelnir:~ 12.02 13:15:07 % df -h
>> Filesystem     Size    Used   Avail Capacity  Mounted on
>> /dev/sd1a     1005M    537M    418M    57%    /
>> mfs:67535      1.9G   29.0K    1.8G     1%    /tmp
>> ...
>
> That's right after a reboot. I'll start Chrome now and it can really chow
> down on some /tmp space :-)

It may be worth checking whether mfs is actually helping -
it's easy to assume that because it's in RAM it must be fast,
but I've had machines where mfs was slower than SSD
(https://marc.info/?l=openbsd-misc&m=164942119618029&w=2),
also it's taking memory that could otherwise be used by
buffer cache.

The main benefit to me from mfs is for things which I explicitly
don't want to hit permanent storage.




Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-12 Thread Crystal Kolipe
On Sun, Feb 12, 2023 at 01:28:04PM +0100, Why 42? The lists account. wrote:
> 
> On Sun, Feb 05, 2023 at 02:50:44PM -0300, Crystal Kolipe wrote:
> > On Sun, Feb 05, 2023 at 06:05:22PM +0100, Why 42? The lists account. wrote:
> > ...
> > > The fstab file contains this mount entry for tmp:
> > > swap /tmp mfs rw,nodev,nosuid,-s=16777216 0 0
> > 
> > This is 8 GB, which exceeds the default value for datasize for the daemon
> > class in /etc/login.conf.
> > 
> > Have you changed /etc/login.conf from the default?
> > 
> > > Did MFS filesystems go away, or have I screwed something up?
> > 
> > You've screwed something up :).
> 
> You're exactly right. With this entry in fstab:
> > swap /tmp mfs rw,nodev,nosuid,-s=4194304 0 0 
> 
> I now have this /tmp space:
> > mjoelnir:~ 12.02 13:15:07 % df -h
> > Filesystem     Size    Used   Avail Capacity  Mounted on
> > /dev/sd1a     1005M    537M    418M    57%    /
> > mfs:67535      1.9G   29.0K    1.8G     1%    /tmp
> > ...

If you've got plenty of physical RAM, you can always increase the datasize in
login.conf and keep your original 8 GB mfs ramdisk rather than reducing it.

Not sure if that was clear from my original reply :).
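
Just as an illustrative sketch (the exact default capabilities vary between
releases, so edit the existing daemon entry in /etc/login.conf rather than
pasting this in verbatim), raising datasize for the daemon class might end up
looking something like:

  # the datasize value below is only an example
  daemon:\
          :datasize=16384M:\
          :tc=default:

If a compiled /etc/login.conf.db exists on the system, rebuild it afterwards
with cap_mkdb /etc/login.conf so the change is picked up.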



Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-12 Thread Why 42? The lists account.


On Sun, Feb 05, 2023 at 02:50:44PM -0300, Crystal Kolipe wrote:
> On Sun, Feb 05, 2023 at 06:05:22PM +0100, Why 42? The lists account. wrote:
> ...
> > The fstab file contains this mount entry for tmp:
> > swap /tmp mfs rw,nodev,nosuid,-s=16777216 0 0
> 
> This is 8 GB, which exceeds the default value for datasize for the daemon
> class in /etc/login.conf.
> 
> Have you changed /etc/login.conf from the default?
> 
> > Did MFS filesystems go away, or have I screwed something up?
> 
> You've screwed something up :).

You're exactly right. With this entry in fstab:
> swap /tmp mfs rw,nodev,nosuid,-s=4194304 0 0 

I now have this /tmp space:
> mjoelnir:~ 12.02 13:15:07 % df -h
> Filesystem     Size    Used   Avail Capacity  Mounted on
> /dev/sd1a     1005M    537M    418M    57%    /
> mfs:67535      1.9G   29.0K    1.8G     1%    /tmp
> ...

That's right after a reboot. I'll start Chrome now and it can really chow
down on some /tmp space :-)

Thanks!

Cheers,
Robb.



Re: Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-05 Thread Crystal Kolipe
On Sun, Feb 05, 2023 at 06:05:22PM +0100, Why 42? The lists account. wrote:
> mount_mfs: mmap: Cannot allocate memory

...

> The fstab file contains this mount entry for tmp:
> swap /tmp mfs rw,nodev,nosuid,-s=16777216 0 0

This is 8 GB, which exceeds the default value for datasize for the daemon
class in /etc/login.conf.
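
(The -s value is given in 512-byte blocks, so the arithmetic is simply:

  16777216 blocks * 512 bytes/block = 8589934592 bytes = 8 GB)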

Have you changed /etc/login.conf from the default?

> Did MFS filesystems go away, or have I screwed something up?

You've screwed something up :).



Q: Error: mount_mfs: mmap: Cannot allocate memory

2023-02-05 Thread Why 42? The lists account.


Hi All,

After an update to a recent snapshot on my desktop system, I noticed
these mount_mfs messages at boot time:

/dev/sd0h (7a1775fef773535e.h): file system is clean; not checking
/dev/sd1j (281ef747da03afe7.j): file system is clean; not checking
/dev/sd1k (281ef747da03afe7.k): file system is clean; not checking
/dev/sd1l (281ef747da03afe7.l): file system is clean; not checking
/dev/sd2c (67c92dad63883338.c): file system is clean; not checking
mount_mfs: mmap: Cannot allocate memory
kbd: keyboard mapping set to de.nodead
keyboard.encoding -> de.nodead
pf enabled
kern.maxproc: 1310 -> 4000
kern.maxthread: 2620 -> 8000
kern.maxfiles: 7030 -> 16000
ddb.panic: 1 -> 0
kern.allowdt: 0 -> 1
starting network
reordering: ld.so libc libcrypto sshd.
starting early daemons: syslogd pflogd ntpd.
starting RPC daemons: portmap mountd nfsd lockd statd.
mount_mfs: mmap: Cannot allocate memory
savecore: no core dump
checking quotas: done.
clearing /tmp
kern.securelevel: 0 -> 1
creating runtime link editor directory cache.
preserving editor files.
running rc.sysmerge
starting network daemons: sshd sndiod.
running rc.firsttime
fw_update: added none; updated none; kept intel,inteldrm,vmm
starting package daemons: messagebus postfix smartd pcscd avahi_daemon.
starting local daemons: sensorsd cron xenodm.

The fstab file contains this mount entry for tmp:
swap /tmp mfs rw,nodev,nosuid,-s=16777216 0 0
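
For reference, that fstab line corresponds roughly to this manual mount_mfs
invocation (options and size copied straight from the entry above):

  mount_mfs -o rw,nodev,nosuid -s 16777216 swap /tmp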

I don't know when this first occurred. I first noticed it when I was
investigating why Chrome had started to log "filesystem full" messages,
e.g. "/: write failed, file system is full".

Since the mfs mount of /tmp failed, it's now using the root fs as /tmp
space, which doesn't have much free space.

I'm currently running: OpenBSD mjoelnir.fritz.box 7.2 GENERIC.MP#1012 amd64

Did MFS filesystems go away, or have I screwed something up?

Cheers,
Robb.