On 2021-02-27 03:35, Mark Schneider wrote:
Hi,


I'm seeing a system crash on OpenBSD 6.8 when writing big files
(around 10 GBytes) to a softraid RAID5 drive.

I can reproduce the error (tested on two different systems, with
OpenBSD 6.8 installed on an SSD drive or a USB stick). The RAID5
drive itself consists of six Samsung PRO 860 512GB SSDs.

In short:

bioctl -c 5 -l sd0a,sd1a,sd2a,sd3a,sd4a,sd5a softraid0
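For context, softraid(4) requires each chunk to be a disklabel partition of fstype RAID before bioctl will assemble them. A minimal setup sketch (device names taken from this report; the non-interactive disklabel input is an assumption, shown for illustration only):

```shell
# Sketch only -- destructive commands, do not run blindly.
# Label each chunk disk with a single partition of fstype RAID
# (repeat for sd0 through sd5; the prompts answered below are
# partition offset, size, and FS type).
printf 'a a\n\n\nRAID\nw\nq\n' | disklabel -E sd0

# Assemble the six chunks into one RAID5 volume; the new softraid
# drive attaches as the next free sd device (sd7 in this report).
bioctl -c 5 -l sd0a,sd1a,sd2a,sd3a,sd4a,sd5a softraid0
```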

obsdssdarc# disklabel sd7
# /dev/rsd7c:
type: SCSI
disk: SCSI disk
label: SR RAID 5
duid: a50fb9a25bf07243
flags:
bytes/sector: 512
sectors/track: 255
tracks/cylinder: 511
sectors/cylinder: 130305
cylinders: 38379
total sectors: 5001073280
boundstart: 0
boundend: 5001073280
drivedata: 0

16 partitions:
#                size           offset  fstype [fsize bsize cpg]
  a:       5001073280                0  4.2BSD   8192 65536 52270
  c:       5001073280                0  unused

#
--------------------------------------------------------------------------------

obsdssdarc# time dd if=/dev/urandom of=/arc-ssd/1GB-urandom.bin bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes transferred in 8.120 secs (132218264 bytes/sec)
    0m08.13s real     0m00.00s user     0m08.14s system

# Working as expected ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


obsdssdarc# time dd if=/dev/urandom of=/arc-ssd/10GB-urandom.bin bs=10M count=1024

# Error messages

uvm_fault(0xffffffff821f5490, 0x40, 0, 1) -> e
kernel: page fault trap, code=0
Stopped at      sr_validate_io+0x44:    cmpl     $0,0x40(%r9)
ddb{2}>

# Crashing OpenBSD 6.8
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^


# After reboot:

obsdssdarc# mount /dev/sd7a /arc-ssd/
mount_ffs: /dev/sd7a on /arc-ssd: Device not configured

obsdssdarc# grep sd7 /var/run/dmesg.boot
softraid0: trying to bring up sd7 degraded
softraid0: sd7 was not shutdown properly
softraid0: sd7 is offline, will not be brought online


More details are in the attached files. Thanks a lot in advance for any feedback.


Kind regards

Mark

Hi,

I can't reproduce this on a recent snapshot:

```
temp# bioctl sd5
Volume      Status               Size Device
softraid0 1 Online      2356352188416 sd5     RAID5
          0 Online       785450747904 1:0.0   noencl <sd0d>
          1 Online       785450747904 1:1.0   noencl <sd1d>
          2 Online       785450747904 1:2.0   noencl <sd2d>
          3 Online       785450747904 1:3.0   noencl <sd3d>
temp# time dd if=/dev/urandom of=20GB-urandom.bin bs=10M count=2048
2048+0 records in
2048+0 records out
21474836480 bytes transferred in 1845.313 secs (11637500 bytes/sec)
   30m45.95s real     0m00.01s user     2m21.61s system
temp# ls -lh
total 41948288
-rw-r--r--  1 root  wheel  20.0G Mar  1 13:05 20GB-urandom.bin
```

```
OpenBSD 6.9-beta (GENERIC.MP) #366: Sun Feb 28 07:15:39 MST 2021
    [email protected]:/usr/src/sys/arch/amd64/compile/GENERIC.MP
real mem = 17134055424 (16340MB)
avail mem = 16599408640 (15830MB)
random: good seed from bootblocks
mpath0 at root
scsibus0 at mpath0: 256 targets
mainbus0 at root
bios0 at mainbus0: SMBIOS rev. 2.7 @ 0xeb5a0 (56 entries)
bios0: vendor American Megatrends Inc. version "2.3a" date 01/06/2021
bios0: Supermicro X9SCL/X9SCM
```
