Today is the first official day of this week and the system runs better in
several respects, but there are still some points that need to be corrected. Some
queries or some tables are very slow. I think the queries inside the program
need to be rewritten.
Now I have set sort_mem to a little bit
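For reference, sort_mem in 7.4 is a per-sort allocation measured in kilobytes; a minimal sketch of raising it, with a purely illustrative value (the poster's actual setting is not shown):

    # postgresql.conf -- sort_mem is per sort operation, in kB (default 1024)
    sort_mem = 8192          # illustrative value only

    -- or per session, without touching the config file:
    SET sort_mem = 8192;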
On Tue, 4 Jan 2005 [EMAIL PROTECTED] wrote:
Today is the first official day of this week and the system runs better in
several respects, but there are still some points that need to be corrected.
Some queries or some tables are very slow. I think the queries inside the
program need to be
Dave Cramer wrote:
Well, it's not quite that simple:
the rule of thumb is that 6-10% of available memory (before postgres loads)
is allocated to shared_buffers.
Then effective_cache_size is set to the SUM of shared_buffers + kernel buffers.
Then you have to look at individual slow queries to determine why
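As a concrete illustration of that rule of thumb, with hypothetical numbers for a 4 GB machine (in 7.4 both settings are counted in 8 kB pages):

    # postgresql.conf -- hypothetical 4 GB box
    shared_buffers = 32768          # 32768 x 8 kB = 256 MB, roughly 6% of RAM
    effective_cache_size = 393216   # ~3 GB: shared_buffers plus expected kernel cache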
Yann,
Are there any plans for rewriting queries to preexisting materialized
views? I mean, rewriting a query (within the optimizer) to use a
materialized view instead of the originating table?
Automatically, and by default, no. Using the RULES system? Yes, you can
already do this and the
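One way this is commonly wired up with the existing machinery, sketched with hypothetical table and column names (the summary table still has to be refreshed by triggers or a batch job; nothing here is automatic):

    -- precomputed summary kept in an ordinary table (the "materialized view")
    CREATE TABLE order_totals_mv AS
        SELECT customer_id, sum(amount) AS total_amount
        FROM orders
        GROUP BY customer_id;

    -- a view over it; the view's implicit ON SELECT rule rewrites queries
    -- against order_totals to read the precomputed rows instead of
    -- re-aggregating orders
    CREATE VIEW order_totals AS
        SELECT customer_id, total_amount FROM order_totals_mv;

Queries must still name order_totals (or the _mv table) explicitly; the planner will not substitute it for a GROUP BY on orders on its own.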
All,
I am currently working on a project for my company that entails
databasing upwards of 300 million specific parameters. In the current
DB design, these parameters are mapped against two lookup tables (2
million and 1.5 million rows, respectively), and I am having extreme issues
getting PG to
1) the 250 million records are currently wiped and reinserted as a
daily snapshot, and the fastest way I have found to do this from a
file, COPY, is nowhere near fast enough. SQL*Loader from Oracle
does some things that I need, i.e. Direct Path access to the db files
(skipping the
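The wipe-and-reload pattern usually suggested here, sketched with hypothetical table, index, and file names (whether it is fast enough for 250 million rows a day is exactly the open question):

    TRUNCATE parameters;                  -- wipe the old snapshot without dead-row bloat
    DROP INDEX parameters_lookup_idx;     -- load into an index-free table
    COPY parameters FROM '/data/snapshot/parameters.dat';
    CREATE INDEX parameters_lookup_idx ON parameters (lookup_id);
    ANALYZE parameters;

Dropping the indexes before the COPY and rebuilding them (plus ANALYZE) afterwards is where most of the win over a plain reload comes from.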
Wagner,
If there is anyone that can give me some tweak parameters or design
help on this, it would be ridiculously appreciated. I have already
created this in Oracle and it works, but we don't want to have to pay
the monster if something as wonderful as Postgres can handle it.
In
Rod,
I do this, PG gets forked many times; it is tough to find the max
number of times I can do this, but I have a Proc::Queue manager Perl
driver that handles all of the COPY calls. I have a quad-CPU machine.
Each COPY only hits one CPU at about 2.1%, but anything over about 5
kicks the load
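For context, each of those parallel workers is essentially issuing its own COPY on a separate connection, along these lines (table and chunk file names are hypothetical; the snapshot is assumed to be split into per-worker files beforehand):

    -- run by each worker on its own connection; the Proc::Queue driver
    -- only limits how many of these run at once
    COPY parameters FROM '/data/snapshot/chunk_03.dat';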
Martha Stewart called it a Good Thing when [EMAIL PROTECTED] (Pallav Kalva)
wrote:
Then you have to look at individual slow queries to determine why
they are slow. Fortunately you are running 7.4, so you can set
log_min_duration_statement to some number like 1000 ms and then
try to analyze why those
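A minimal sketch of that setting, using the 1000 ms threshold suggested above:

    # postgresql.conf -- log every statement that takes longer than one second
    log_min_duration_statement = 1000    # milliseconds; -1 disables, 0 logs everything

Once the slow statements show up in the server log, each one can be run by hand under EXPLAIN ANALYZE to see where the time actually goes.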
Ryan,
I do this, PG gets forked many times; it is tough to find the max
number of times I can do this, but I have a Proc::Queue manager Perl
driver that handles all of the COPY calls. I have a quad-CPU machine.
Each COPY only hits one CPU at about 2.1%, but anything over about 5
On Tue, 2005-01-04 at 14:02 -0500, Rod Taylor wrote:
1) the 250 million records are currently wiped and reinserted as a
daily snapshot, and the fastest way I have found to do this from a
file, COPY, is nowhere near fast enough. SQL*Loader from Oracle
does some things that I
I will add more RAM, but someone said RH 9.0 has poor recognition of RAM
above 4 GB?
I think they were referring to 32-bit architectures, not distributions as
such.
Sorry, wrong reason. Then should I increase RAM beyond 4 GB on a 32-bit
architecture?
Should I turn off hyperthreading