Hi, I tested the vacuum_mem setting on a machine with 4 CPUs and 4 GB of RAM. I am the only user on that machine.
The table:

 tablename | size_kb |  reltuples
-----------+---------+-------------
 big_t     | 2048392 | 7.51515e+06

Case 1:
1. vacuum full big_t;
2. begin; update big_t set email = lpad('a', 255, 'b'); rollback;
3. set vacuum_mem=655360;  -- 640M
4. vacuum big_t;

It takes 1,415,375 ms. Also, from top, the max SIZE is 615M while SHARE stays at 566M:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM  TIME COMMAND
 5914 postgres  16   0  615M 615M  566M D     7.5 15.8 21:21 postgres: postgres mydb xxx.xxx.xxx.xxx:34361 VACUUM

Case 2:
1. vacuum full big_t;
2. begin; update big_t set email = lpad('a', 255, 'b'); rollback;
3. set vacuum_mem=65536;  -- 64M
4. vacuum big_t;

It takes 1,297,798 ms. Again, from top, the max SIZE is 615M while SHARE stays at 566M:

  PID USER     PRI  NI  SIZE  RSS SHARE STAT %CPU %MEM  TIME COMMAND
 3613 postgres  15   0  615M 615M  566M D    17.1 15.8  9:04 postgres: postgres mydb xxx.xxx.xxx.xxx:34365 VACUUM

It seems vacuum_mem has no effect on performance at all. In reality, we vacuum nightly, and I want to find out which vacuum_mem value is best for shortening the vacuum time. Any thoughts?

Thanks,
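
P.S. For reference, here is a minimal sketch of how each timed run can be driven from a psql session (assuming psql's \timing toggle is available in this version; big_t and the vacuum_mem values are the same ones used above):

\timing on
vacuum full big_t;                        -- rebuild the table to a known starting state
begin;
update big_t set email = lpad('a', 255, 'b');
rollback;                                 -- the rolled-back update leaves ~7.5M dead tuples for vacuum to reclaim
set vacuum_mem = 65536;                   -- value under test, in kilobytes (65536 = 64M)
vacuum big_t;                             -- the "Time:" line printed for this statement is the number compared above

Each candidate vacuum_mem value is tested by repeating the whole sequence with a different SET value, so every run starts from the same freshly rebuilt table.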