Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread Greg Oster
On 6/28/20 7:31 PM, John Klos wrote:
>> Any thoughts about what's going on here? Is this because the drives are 512e drives? Three weeks is a LONG time to reconstruct.
> So this turns out to be a failing drive. SMART doesn't show it's failing, but the one that's failing defaults to having the

Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread John Klos
> Any thoughts about what's going on here? Is this because the drives are 512e drives? Three weeks is a LONG time to reconstruct.
So this turns out to be a failing drive. SMART doesn't show it's failing, but the one that's failing defaults to having the write cache off, and turning it on
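
For reference, checking and toggling a drive's on-board write cache on NetBSD can be done with dkctl(8); the sketch below is generic (wd2 stands in for whichever component is suspect) and is not taken from the thread:

  # report whether the drive's read and write caches are enabled
  dkctl wd2 getcache

  # enable read and write caching; append "save" to keep the setting
  # across power cycles if the drive supports saving it
  dkctl wd2 setcache rw save

  # SMART health summary for the same drive
  atactl wd2 smart status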

Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread Greg Oster
On 6/28/20 12:29 PM, Edgar Fuß wrote:
>> That's the reconstruction algorithm. It reads each stripe and if it has a bad parity, the parity data gets rewritten.
> That's the way parity re-write works. I thought reconstruction worked differently. oster@?
Reconstruction does not do the "read",
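
To make the distinction concrete, these are the raidctl(8) operations being contrasted; raid0 and the component name below are illustrative, not taken from the thread:

  # parity check/re-write: read each stripe and rewrite any parity
  # that is not up to date (the behaviour described above)
  raidctl -P raid0

  # reconstruction: fail a component and rebuild its contents onto an
  # available hot spare, reading only from the surviving component(s)
  raidctl -F /dev/wd1a raid0

  # check the progress of parity re-writing or reconstruction
  raidctl -S raid0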

Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread Edgar Fuß
> That's the reconstruction algorithm. It reads each stripe and if it has a bad parity, the parity data gets rewritten.
That's the way parity re-write works. I thought reconstruction worked differently. oster@?

Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread Michael van Elst
j...@ziaspace.com (John Klos) writes:
> Next, raidctl doesn't handle NAME= for device yet:
> raidctl -v -a NAME=raid8tb1 raid0
> raidctl: ioctl (RAIDFRAME_ADD_HOT_SPARE) failed: No such file or directory
Yes, so far, only the config file knows about NAME= syntax.
> Finally, even though these are
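
A minimal sketch of where NAME= is accepted today, i.e. the config file handed to raidctl -c/-C. The RAID-1 layout numbers are only an example; the wedge names follow the ones mentioned in the thread:

  START array
  # numRow numCol numSpare
  1 2 0

  START disks
  NAME=raid8tb0
  NAME=raid8tb1

  START layout
  # sectPerSU SUsPerParityUnit SUsPerReconUnit RAID_level
  128 1 1 1

  START queue
  fifo 100

On the command line the spare still has to be named by its dk node, e.g. raidctl -v -a /dev/dk3 raid0, where dk3 is whatever wedge number dkctl listwedges reports for that disk (the dk number here is purely illustrative).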

Re: Horrendous RAIDframe reconstruction performance

2020-06-28 Thread Edgar Fuß
What is your stripe size?
> [ 2.859768] dk0 at wd2: "raid8tb0", 15611274240 blocks at 1024, type: raidframe
At least the components seem to be properly aligned on the disc.
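
Spelling out the arithmetic behind that remark (512-byte logical sectors, as reported by a 512e drive):

  1024 sectors * 512 bytes/sector = 524288 bytes = 128 * 4096 bytes

so the wedge starts on a 4 KiB physical-sector boundary, which is the alignment that matters on a 512e disk.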

Horrendous RAIDframe reconstruction performance

2020-06-28 Thread John Klos
Hello, I'm setting up two helium, non-SMR, 512e 8 TB disks (HGST HUH728080ALE604) in a RAIDframe mirror:
[ 2.829768] wd2 at atabus2 drive 0
[ 2.829768] wd2: <HGST HUH728080ALE604>
[ 2.829768] wd2: drive supports 16-sector PIO transfers, LBA48 addressing
[ 2.829768] wd2: 7452 GB, 15504021 cyl, 16
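
For context, the usual raidctl(8) sequence for bringing a mirror like this into service looks roughly as follows; the config file name and the serial number are placeholders, not taken from the post:

  # configure the set for the first time, forcing past the still-blank
  # component labels (see raidctl(8) for the config file format)
  raidctl -C raid0.conf raid0

  # write component labels with an arbitrary serial number
  raidctl -I 2020062801 raid0

  # initialize parity, i.e. sync the two halves of the mirror
  raidctl -iv raid0

  # show component and parity status
  raidctl -s raid0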