Ibrahim Harrani wrote:
Hi Craig,
Here is the result. It seems that disk write performance is terrible!
r...@myserver /usr]# time (dd if=/dev/zero of=bigfile bs=8192
count=1000000; sync)
1000000+0 records in
1000000+0 records out
8192000000 bytes transferred in 945.343806 secs (8665630 bytes/sec)
real 15m46.206s
user 0m0.368s
sys 0m15.560s
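For reference, the dd figures above work out to well under 9 MB/s, which a quick awk one-liner confirms (a sketch using the byte and second counts from the transcript; a healthy single SATA disk of this era typically sustains several times that for sequential writes):

```shell
# Recompute the transfer rate dd reported: bytes / seconds / 1e6
echo "8192000000 945.343806" | awk '{ printf "%.2f MB/s\n", $1 / $2 / 1000000 }'
# -> 8.67 MB/s
```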
So it's nothing to do with Postgres. I'm no expert solving this sort of
problem, but I'd start by looking for:
- a rogue process that's using disk bandwidth (use vmstat when the system is
idle)
- system logs, maybe there are a zillion error messages
- if you have a second disk, test its performance
- if you don't have a second disk, buy one, install it, and try it
- get another SATA controller and try that
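For the vmstat suggestion above, the columns to watch are bi/bo (blocks in/out per second); on a truly idle box both should sit near zero. A minimal sketch, parsing a sample of Linux procps `vmstat 1` output (field positions are an assumption and may differ on other systems; in practice you'd pipe `vmstat 1` directly):

```shell
# Sample 'vmstat 1' output; the tenth field (bo) is blocks written per second
vmstat_sample='procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free   buff  cache   si   so    bi    bo   in   cs us sy id wa st
 0  0      0 512000 102400 204800    0    0     2 14800  150  300  1  4 60 35  0'

# Flag any interval with heavy writes while the system is supposedly idle
echo "$vmstat_sample" | awk 'NR > 2 && $10 > 1000 { print "busy: " $10 " blocks/s out" }'
# -> busy: 14800 blocks/s out
```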
Or do the reverse: Put the disk in a different computer (one that you've tested
beforehand and verified is fast) and see if the problem follows the disk. Same
for the SATA card.
It could be your SATA controller, the disk, some strange hdparm setting ... who
knows?
I ran into this once a LONG time ago with a kernel that didn't recognize the
disk or driver or something, and disabled the DMA (direct-memory access)
feature, which meant the CPU had to handle every single byte coming from the
disk, which of course meant SLOW, plus you couldn't even type while the disk
was busy. A simple manual call to hdparm(1) to force DMA on fixed it. Weird
stuff like that can be very hard to find.
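The DMA check described above looked roughly like this on an old IDE setup (a sketch: the device name /dev/hda and the sample output are assumptions, and modern libata presents SATA disks as /dev/sdX with DMA handled automatically):

```shell
# Typical 'hdparm -d /dev/hda' output when DMA has been disabled (sample text)
sample='/dev/hda:
 using_dma    =  0 (off)'

# Quick check: warn when using_dma is 0; the fix was simply 'hdparm -d1 /dev/hda'
echo "$sample" | awk '/using_dma/ && $3 == 0 { print "DMA off: expect very slow I/O" }'
# -> DMA off: expect very slow I/O
```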
I also saw very low write speed once on a RAID device with a battery-backed
cache, when the battery went dead. The RAID controller went into its
conservative mode, which for some reason was much slower than the disk's raw
performance.
Craig
--
Sent via pgsql-performance mailing list (pgsql-performance@postgresql.org)
To make changes to your subscription:
http://www.postgresql.org/mailpref/pgsql-performance