On Tue, Mar 01, 2016 at 07:07:58PM +0700, Robert Elz wrote:
> | - 64 MB region at offset 498074652673 with many changes
> You have 500GB drives, right? So that is way out near the end.
> What number(s) of sectors do the drives report (should be in dmesg) ?

Yes, 500 GB disks, 1953525168 sectors. The offset above is around 463 GB.

> What is in the labels for wd0 and wd1 (MBR probably, and the netbsd disklabel
> if there is one, but GPT would be OK too - whatever you are using).
> Note I mean the labels on the raw drives, not what is inside the raid
> "device", that is irrelevant for this.

Oh yes, you are right. I was looking at a difference between the swap
partitions:

#         size     offset  fstype
 a:  972800000       2048  RAID
 b:  980723120  972802048  swap
 c: 1953523120       2048  unused
 d: 1953525168          0  unused

Hence, the only differences in the RAID partitions are:
- 32 bits at the end of the MBR bootstrap
- one bit in the RAIDframe structures

But I still have no explanation for why the kernel got corrupted, or
whether that problem could be more widespread. RAIDframe is probably
innocent there, though.

--
Emmanuel Dreyfus
[email protected]
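The offset arithmetic above can be sanity-checked with a short sketch. It maps the changed region's byte offset onto the quoted disklabel, assuming 512-byte sectors and that the offset is measured from the start of the raw disk (both are assumptions; the mail states neither explicitly). Under those assumptions the 64 MB region starts a few sectors into partition b, which is consistent with the differences being between the swap partitions.

```python
SECTOR = 512  # assumed sector size in bytes

# Partitions from the disklabel quoted above: (name, size, offset) in sectors.
PARTITIONS = [
    ("a (RAID)", 972800000, 2048),
    ("b (swap)", 980723120, 972802048),
]

def locate(byte_offset):
    """Return (partition name, sector offset within it) for a raw-disk byte offset."""
    sector = byte_offset // SECTOR
    for name, size, start in PARTITIONS:
        if start <= sector < start + size:
            return name, sector - start
    return None, None

# The 64 MB region of changes reported in the diff:
name, rel = locate(498074652673)
print(name, rel)  # lands in "b (swap)", 8 sectors past its start
```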
