Hi All,
I have a poorly performing SQL statement, shown below. The table has about
200M records, and each employee has about 100 records on average. The query
takes about 3 hours. All I want is to update the flag on the highest version
of each client's record. Any suggestions are welcome!
Thanks,
Mike
SQL===
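Without seeing the statement, a common culprit for this kind of update is a per-row lookup that rescans the table. The classic rewrite is a correlated MAX() subquery (the column names `client_id`, `version`, and `flag` below are guesses at the schema). Sketched here through Python's sqlite3 on a toy table, but the UPDATE itself is plain SQL that PostgreSQL accepts unchanged:

```python
# Sketch of the "flag the highest version per client" rewrite.
# Schema and column names (client_id, version, flag) are assumptions.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE t (client_id INTEGER, version INTEGER, flag INTEGER)"
)
conn.executemany("INSERT INTO t VALUES (?, ?, 0)",
                 [(1, 1), (1, 2), (1, 3), (2, 1), (2, 2)])

# Flag only the row holding each client's maximum version.
conn.execute("""
    UPDATE t SET flag = 1
    WHERE version = (SELECT MAX(version) FROM t t2
                     WHERE t2.client_id = t.client_id)
""")

print(sorted(conn.execute(
    "SELECT client_id, version FROM t WHERE flag = 1").fetchall()))
# [(1, 3), (2, 2)]
```

On 200M rows the correlated subquery only performs acceptably with an index such as `(client_id, version)`, so check that first; without it each row triggers its own scan.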
Bob Branch wrote:
> Is there a reference I can look at that will give me a description
> of how to determine sensible values for settings like
> shared_buffers, effective_cache_size, etc. that I see discussed on
> this list and elsewhere?
You might start with these links:
http://wiki.postgre
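As a rough illustration of what those pages lead to, here is a hypothetical starting point for a dedicated 16GB server. These are conventional rules of thumb, not recommendations for any specific workload:

```
# Hypothetical postgresql.conf starting points (dedicated 16GB box);
# refine against your actual workload.
shared_buffers = 4GB          # ~25% of RAM is a common starting point
effective_cache_size = 12GB   # ~50-75% of RAM; a planner hint, not an allocation
work_mem = 16MB               # per sort/hash node, per backend -- keep modest
maintenance_work_mem = 512MB  # used by VACUUM, CREATE INDEX, etc.
```

Note that effective_cache_size reserves nothing; it only tells the planner how much of the database it can expect to find cached by PostgreSQL plus the OS.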
Hello!
We've just installed a couple new servers that will be our new database
servers. The basic hardware stats of each are:
2 x Xeon E5620 (2.4GHz quad-core)
32GB DDR3 1333 RAM
6 x 600GB 15krpm SAS drives - RAID 10
Perc 6/i RAID controller with battery backup
I've lurked on this list for a
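For what it's worth, the usual rules of thumb applied to that box work out roughly as below (a back-of-the-envelope sketch; the 25%/75% splits are conventional starting points, not hard rules):

```python
# Rough sizing for the box described above: 32GB RAM, 6 x 600GB in RAID 10.
ram_gb = 32
drives, drive_gb = 6, 600

shared_buffers_gb = ram_gb // 4            # ~25% of RAM as a starting point
effective_cache_size_gb = ram_gb * 3 // 4  # ~75% of RAM for the planner hint
usable_raid10_gb = drives * drive_gb // 2  # RAID 10 mirrors halve raw capacity

print(shared_buffers_gb, effective_cache_size_gb, usable_raid10_gb)
# 8 24 1800
```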
That depends on your application's requirements. If a transaction on table X
fails, do you still want the history (noting the failure)? If so, go with
embedding the code in your script. If you only want history for successful
transactions, a trigger will take care of that for you automatically.
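The key behavior is that a trigger's history row lives in the same transaction as the change it records, so a rollback discards both. A minimal sketch (using Python's sqlite3 for brevity; table names `x` and `x_history` are made up, and PostgreSQL trigger syntax differs):

```python
# Demonstrates that trigger-written history disappears when the
# originating transaction fails. Table/column names are hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE x (id INTEGER PRIMARY KEY, val TEXT);
    CREATE TABLE x_history (id INTEGER, val TEXT);
    CREATE TRIGGER x_audit AFTER INSERT ON x
    BEGIN
        INSERT INTO x_history VALUES (NEW.id, NEW.val);
    END;
""")

# Successful transaction: the trigger's history row is committed too.
with conn:
    conn.execute("INSERT INTO x VALUES (1, 'ok')")

# Failed transaction: rollback discards the row AND its history entry.
try:
    with conn:
        conn.execute("INSERT INTO x VALUES (2, 'oops')")
        raise RuntimeError("simulated application failure")
except RuntimeError:
    pass

print(conn.execute("SELECT COUNT(*) FROM x_history").fetchone()[0])
# 1
```

So if you need a record of the failed attempts themselves, that logging has to happen in the application script, outside (or after) the failing transaction.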