John O'Connor wrote:
Hi,
I am getting some strange errors from an OpenBSD system that I am
using as a backup server.
I transfer some files onto the system via ftp. (1260 files with a
total size of 60G.)
The transfer works OK and then I try to check the newly arrived
files.
The last file in the set is an md5 file of the whole set.
I now try:
sum -c abc.md5 > result.txt
I get an error.
First, I see a number of IO errors and then finally a message that
the file system is full. (It is not.)
sum: abc.md5: read error: Input/output error
/home2: write failed, file system is full
So, I power the system down and then try to reboot. I get the same
error and eventually, the system refuses to reboot claiming that
one block on the disk cannot be read.
I then moved the disk to a Win2K machine and ran a SMART monitor on
the drive. The monitor reported that the drive was perfect - no
bad sectors and no read errors.
What can be going on here?
It does not look like a disk error - surely SMART would have noticed
if it were?
It does not look like a hardware error elsewhere in the system - I
plugged in another drive and the checksums on that drive all
turned out OK.
I'm a bit stuck here. Any suggestions welcome.
The system is based on a Gigabyte EP35 board and the drive is an
almost new Samsung 1.5TB model. It is split into two partitions,
first - where I see the error - is 820G and the second is the
rest. I have tried ffs and ffs2 with the same result.
jpoc
SMART is not the final word.
Try running badblocks from e2fsprogs.
Be sure you use it correctly: the partitions must be unmounted while it runs.
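For reference, a typical invocation might look like the sketch below. Note that badblocks comes with e2fsprogs, so you would run it with the disk attached to a Linux box (or via the OpenBSD port); the device name /dev/sdb is an assumption - check dmesg for the actual name on your system.

```shell
# Device name is an assumption -- check dmesg to find the right one,
# and make absolutely sure no partition on it is mounted.
DEV=/dev/sdb

# Read-only scan: safe, leaves the data untouched.
badblocks -sv "$DEV"

# Non-destructive read-write test: much slower, but it exercises
# writes as well while preserving the existing contents.
badblocks -nsv "$DEV"
```

The -s flag shows progress and -v reports each bad block found; if either pass turns up errors that SMART missed, that points at the drive (or its cabling) after all.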
--
A human being should be able to change a diaper, plan an invasion,
butcher a hog, conn a ship, design a building, write a sonnet, balance
accounts, build a wall, set a bone, comfort the dying, take orders,
give orders, cooperate, act alone, solve equations, analyze a new
problem, pitch manure, program a computer, cook a tasty meal, fight
efficiently, die gallantly. Specialization is for insects.
-- Robert Heinlein