Folks, I know a number of people here run RHEL (or rebuilds such as CentOS) on their systems, so I wanted to share a problem I've apparently had on my T410 (500G hard drive) since I installed RHEL 6. I reported it at https://bugzilla.redhat.com/show_bug.cgi?id=667485
Bottom line: I think a default RHEL 6 power-management setting caused my hard drive to fail in less than 2 months, adding roughly 200,000 load cycles in that time (as reported by the command: smartctl -a /dev/sda | grep Load_Cycle_Count). When I ran normal recovery tools on that drive, the problems were essentially limited to the partition holding the RHEL 6 top-level root directory. If I understand correctly, 200,000 cycles shouldn't by itself cause a drive failure, but I'm guessing the way the cycles were concentrated on the RHEL 6 partition contributed to the failure. Perhaps I'm misunderstanding something about hdparm and load cycles; I don't know.

I think a similar bug for Ubuntu was discussed here a while back, which I cited in the bugzilla. I've read some reports suggesting the problem might be limited to certain hard drives (e.g. Hitachi), and that it mainly bites laptops because of their aggressive power-save schemes.

Since I installed the new drive, I've had some success reducing the number of cycles with each of the following commands:

  hdparm -B 200 /dev/sda
  hdparm -B 254 /dev/sda

Not sure which is better, but if you run RHEL 6 or a rebuild on your laptop, I'm guessing either should help.

Thanks,
Mike

_______________________________________________
PLUG mailing list
[email protected]
http://lists.pdxlinux.org/mailman/listinfo/plug
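[Follow-up sketch, not from Mike's original post: one way to keep an eye on the counter he mentions is to pull the raw Load_Cycle_Count value out of smartctl output and compare it over time. The load_cycles helper below is hypothetical; the smartctl and hdparm invocations assume smartmontools and hdparm are installed and are the same commands discussed above.]

```shell
#!/bin/sh
# Sketch: extract the raw Load_Cycle_Count value from `smartctl -a` output.
# Reads smartctl output on stdin, so it can be tested without real hardware.
load_cycles() {
    # The raw value is the last field of the Load_Cycle_Count attribute line.
    awk '/Load_Cycle_Count/ { print $NF }'
}

# Example usage (requires root):
#   smartctl -a /dev/sda | load_cycles     # note the number, check again later
#   hdparm -B 254 /dev/sda                 # relax APM so heads park less often
```

If the number grows by more than a few per minute while the machine is idle, the drive is likely parking its heads aggressively, which is what the hdparm -B settings above are meant to curb.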
