With my application, it seems that the size of the L2 cache has a great effect:
going from 512 KB of L2 cache to 1 MB boosts performance by a factor of 3, and
by another 20% from 1 MB to 2 MB of L2 cache.
I don't understand why a 512 KB L2 cache is too small to fit the data.
Is there a tool to trace processor
Hello
I have a large database with 4 large tables (each containing at least
200,000 rows, perhaps even 1 or 2 million), and I wonder whether it would be
better to split them into small tables (e.g. tables of 2,000 rows) to
speed up access to and updates of those tables (considering that I will
have
On Wed, Jul 13, 2005 at 09:52:20AM +0800, Christopher Kings-Lynne wrote:
The 8.0.2 JDBC driver uses real prepared statements instead of faked
ones. The problem is that the new protocol (which the 8.0.2 driver uses) has
a bug where protocol-prepared queries don't get logged properly.
I don't know
Nicolas,
These sizes would not be considered large. I would leave them
as single tables.
Ken
OK, I thought it was large, but I must confess I'm relatively new to the
database world. Thank you for the answer.
Just another question: what is the maximum number of rows that can be
contained in
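For reference, PostgreSQL does not impose a fixed limit on the number of rows in a table; a table is bounded by its total on-disk size, not by a row count. You can get a rough idea of a table's current size from the system catalogs (the table name below is just a placeholder):

```sql
-- relpages counts 8 KB disk blocks and reltuples is a row-count
-- estimate; both are refreshed by VACUUM and ANALYZE.
SELECT relname, relpages, reltuples
FROM pg_class
WHERE relname = 'mytable';
```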
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
It appears not to matter whether it is one of the tables
being written to that is ANALYZEd. I can ANALYZE an old,
quiescent table, or a system table and see this effect.
Can you confirm that this effect is still seen even when the
On Wed, 2005-07-13 at 10:20 +0200, Jean-Max Reymond wrote:
With my application, it seems that the size of the L2 cache has a great effect:
going from 512 KB of L2 cache to 1 MB boosts performance by a factor of 3, and
by another 20% from 1 MB to 2 MB of L2 cache.
Memory request time is the main bottleneck in well
On Wed, 2005-07-13 at 09:52 +0800, Christopher Kings-Lynne wrote:
Is there a different kind of 'prepared' statement
that we should be using in the driver to get logging
to work properly? What is the 'new' protocol?
The 8.0.2 jdbc driver uses real prepared statements instead of faked
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
It appears not to matter whether it is one of the tables
being written to that is ANALYZEd. I can ANALYZE an old,
quiescent table, or a system table and see this effect.
Can you
Gurus,
A table in one of my databases has just crossed the 30 million row
mark and has begun to feel very sluggish for just about anything I do
with it. I keep the entire database vacuumed regularly. And, as
long as I'm not doing a sequential scan, things seem reasonably quick
most of
Ian Westmacott [EMAIL PROTECTED] writes:
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
It appears not to matter whether it is one of the tables
being written to that is ANALYZEd. I can ANALYZE an old,
quiescent table, or a system
On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
I might be wrong, but there may be something substantially more
wrong than slow I/O.
John
Yes, I'm afraid of that too. I just don't know what tools I should
use to figure that out. I have some 20 other databases on this
system,
* Dan Harris ([EMAIL PROTECTED]) wrote:
On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
I might be wrong, but there may be something substantially more
wrong than slow I/O.
Yes, I'm afraid of that too. I just don't know what tools I should
use to figure that out. I have some 20
On Wed, Jul 13, 2005 at 01:16:25PM -0600, Dan Harris wrote:
On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
I might be wrong, but there may be something substantially more
wrong than slow I/O.
Yes, I'm afraid of that too. I just don't know what tools I should
use to figure that
On Jul 13, 2005, at 2:17 PM, Stephen Frost wrote:
Could you come up w/ a test case that others could reproduce where
explain isn't returning?
This was simply due to my n00bness :) I had always been doing
EXPLAIN ANALYZE instead of just EXPLAIN. Next time one of these
queries comes up,
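For anyone tripping over the same thing: plain EXPLAIN only plans the query and returns immediately, while EXPLAIN ANALYZE actually executes it, so on a long statement the latter takes as long as the statement itself. A sketch (table and column names below are made up):

```sql
-- Plans only; returns at once even for a huge UPDATE.
EXPLAIN UPDATE big_table SET flag = true WHERE id < 100000;

-- Executes the statement to collect real row counts and timings.
-- Since the UPDATE really runs, wrap it in a transaction and roll
-- back if you don't want to keep its effects.
BEGIN;
EXPLAIN ANALYZE UPDATE big_table SET flag = true WHERE id < 100000;
ROLLBACK;
```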
On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
Ian Westmacott [EMAIL PROTECTED] writes:
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
It appears not to matter whether it is one of the tables
being written to that is ANALYZEd.
On Jul 13, 2005, at 2:54 PM, Dan Harris wrote:
4 x 2.2GHz Opterons
12 GB of RAM
4x10k 73GB Ultra320 SCSI drives in RAID 0+1
1GB hardware cache memory on the RAID controller
If it is taking that long to update about 25% of your table, then you
must be I/O bound. Check I/O while you're
I can at least report that the problem does not seem to
occur with Postgres 8.0.1 running on a dual Opteron.
--Ian
On Wed, 2005-07-13 at 16:39, Simon Riggs wrote:
On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
Ian Westmacott [EMAIL PROTECTED] writes:
On Wed, 2005-07-13 at
On Wed, 2005-07-13 at 12:54 -0600, Dan Harris wrote:
For example, as I'm writing this, I am running an UPDATE statement
that will affect a small part of the table, and is querying on an
indexed boolean field.
An indexed boolean field?
Hopefully, ftindex is false for very few rows of the
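A follow-up thought: a B-tree index on a bare boolean column is rarely selective enough to help. If ftindex is false for only a small fraction of rows, a partial index on that condition (supported since PostgreSQL 7.2) is usually the better fit; the table and index names below are made up for illustration:

```sql
-- Index only the rows still awaiting indexing, so the planner can
-- use it for WHERE ftindex = false and the index stays tiny.
CREATE INDEX docs_needing_ftindex_idx
    ON docs (id)
    WHERE ftindex = false;
```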
Hi,
I'm having a problem with a query that performs a sequential scan on a
table when it should be performing an index scan. The interesting thing
is, when we dumped the database on another server, it performed an index
scan on that server. The systems are running the same versions of
Dennis [EMAIL PROTECTED] writes:
Checking the status of the connections at the point where the CPU is maxed
out (ps -eaf | grep postgres:), I saw this:
127 idle
12 bind
38 parse
34 select
Are you sure the 100% CPU usage is solely contributed by PostgreSQL? Also, from
the ps status you list, I
Qingqing Zhou wrote:
Are you sure the 100% CPU usage is solely contributed by PostgreSQL? Also, from
the ps status you list, I can hardly see that it's caused by the problem
you mentioned below.
The PostgreSQL processes are what is taking up all the CPU. There aren't
any other major
What is the load average on this machine? Do you do many updates? If you
do a lot of updates, perhaps you haven't vacuumed recently. We were
seeing similar symptoms when we started load testing our stuff and it
turned out we were vacuuming too infrequently.
David
Dennis wrote:
Qingqing Zhou
Hi,
Our application requires a number of processes to select and update rows
from a very small (10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates start getting painfully slow.
We are
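For readers hitting the same pattern: a tiny, frequently updated table is the textbook case of dead-tuple bloat, since every UPDATE leaves a dead row version behind, and a 10-row table can accumulate thousands of them. Frequent plain VACUUMs (or, on 8.0, the contrib pg_autovacuum daemon) keep this in check; the table name below is a placeholder:

```sql
-- Plain VACUUM reclaims dead row versions without locking out
-- readers or writers; on a 10-row hot table it is nearly free,
-- so running it every few minutes is reasonable.
VACUUM ANALYZE task_state;

-- If the table has already bloated badly, VACUUM FULL rewrites it
-- compactly, but takes an exclusive lock while it runs.
VACUUM FULL task_state;
```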