Does anyone have any white papers or basic guides for a large RAM
server?

We are consolidating two databases, which currently run on a 4 GB and
a 2 GB machine, to enable better data-mining. The data issues on the
4 GB machine are numerous: things like CREATE INDEX and update
queries failing.
Alex Hochberger wrote:
> Does anyone have any white papers or basic guides for a large RAM
> server?
>
> We are consolidating two databases, which currently run on a 4 GB
> and a 2 GB machine, to enable better data-mining. The data issues on
> the 4 GB machine are numerous: things like CREATE INDEX and
It's not on rebuilding the index; it's on CREATE INDEX.

I attribute it to wrong settings, Ubuntu bizarreness, and general
problems.

We need new hardware anyway: the servers are running on aging
infrastructure, and we decided to replace everything at once with a
new system that will last us the next 3-4 years.
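In case it helps anyone hitting the same wall: CREATE INDEX takes its
sort memory from maintenance_work_mem, not work_mem, so if the
failures are memory-related, raising it for the session is the usual
first experiment. A minimal sketch (the table and column names are
invented for illustration):

    -- CREATE INDEX sorts in maintenance_work_mem, not work_mem,
    -- so raise it for this session before the build.
    SET maintenance_work_mem = '1GB';

    -- Hypothetical table/column standing in for the failing build.
    CREATE INDEX idx_events_user_id ON events (user_id);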
On Nov 29, 2007, at 2:15 PM, Richard Huxton wrote:
> Alex Hochberger wrote:
>> Problem Usage: we have a 20GB table with 120m rows that we are
>> splitting into some sub-tables. Generally, we do large data pulls
>> from here, 1 million - 4 million records at a time, stored in a new
>> table for
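A pull like the one described is typically just a CREATE TABLE ... AS
over the big table; a minimal sketch, with the table names and date
filter invented for illustration:

    -- Hypothetical example of the kind of pull described above:
    -- materialize a 1-4 million row slice into its own table.
    CREATE TABLE pull_2007_q4 AS
    SELECT *
    FROM big_table
    WHERE created_at >= '2007-10-01'
      AND created_at <  '2008-01-01';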
Alex,
> The new machine will have 48 GB of RAM, so figuring out starting
> points for shared_buffers and work_mem/maintenance_work_mem is going
> to be a crap shoot, since the defaults still seem to be based on
> 256 MB of RAM or less.
Why a crap shoot?
Set shared_buffers to 12GB. Set
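To make that concrete, a starting postgresql.conf sketch for a
dedicated 48 GB machine might look like the following. The
shared_buffers figure is from the advice above; the other numbers are
illustrative assumptions that depend on workload (work_mem
especially, since it is allocated per sort/hash operation per
backend):

    # Sketch for a dedicated 48 GB database server; numbers other
    # than shared_buffers are assumptions, not from the thread.
    shared_buffers = 12GB            # ~25% of RAM, as suggested above
    work_mem = 100MB                 # per sort/hash node, per backend
    maintenance_work_mem = 1GB       # used by CREATE INDEX and VACUUM
    effective_cache_size = 36GB      # planner hint: OS cache + buffers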