With my application, it seems that the size of the L2 cache has a great effect:
going from 512 KB of L2 cache to 1 MB boosts performance by a factor of 3, and
by another 20% from 1 MB to 2 MB of L2 cache.
I don't understand why a 512 KB L2 cache is too small to fit the data.
Is there a tool to trace processor activit
Hello
I have a large database with 4 large tables (each containing at least
200,000 rows, perhaps even 1 or 2 million), and I wonder whether it would be
better to split them into small tables (e.g. tables of 2,000 rows) to
speed up access to and updates of those tables (considering that I will
have few
On Wed, Jul 13, 2005 at 09:52:20AM +0800, Christopher Kings-Lynne wrote:
> The 8.0.2 jdbc driver uses real prepared statements instead of faked
> ones. The problem is the new protocol (that the 8.0.2 driver uses) has
> a bug where protocol-prepared queries don't get logged properly.
> I don't k
Nicolas,
These sizes would not be considered large. I would leave them
as single tables.
Ken
OK, I thought it was large, but I must confess I'm relatively new to the
database world. Thank you for the answer.
Just another question: what is the maximum number of rows that can be
contained in a
On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
> It appears not to matter whether it is one of the tables
> being written to that is ANALYZEd. I can ANALYZE an old,
> quiescent table, or a system table and see this effect.
Can you confirm that this effect is still seen even when the ANA
On Wed, 2005-07-13 at 10:20 +0200, Jean-Max Reymond wrote:
> With my application, it seems that the size of the L2 cache has a great effect:
> going from 512 KB of L2 cache to 1 MB boosts performance by a factor of 3, and
> by another 20% from 1 MB to 2 MB of L2 cache.
Memory request time is the main bottleneck in well t
On Wed, 2005-07-13 at 09:52 +0800, Christopher Kings-Lynne wrote:
> > Is there a different kind of 'prepared' statements
> > that we should be using in the driver to get logging
> > to work properly? What is the 'new' protocol?
>
> The 8.0.2 jdbc driver uses real prepared statements instead of fa
On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
> > It appears not to matter whether it is one of the tables
> > being written to that is ANALYZEd. I can ANALYZE an old,
> > quiescent table, or a system table and see this effect.
>
> Can
Gurus,
A table in one of my databases has just crossed the 30 million row
mark and has begun to feel very sluggish for just about anything I do
with it. I keep the entire database vacuumed regularly. And, as
long as I'm not doing a sequential scan, things seem reasonably quick
most of t
Ian Westmacott <[EMAIL PROTECTED]> writes:
> On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
>> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
>>> It appears not to matter whether it is one of the tables
>>> being written to that is ANALYZEd. I can ANALYZE an old,
>>> quiescent table, or
So sorry, I forgot to mention I'm running version 8.0.1
Thanks
Hi,
I've got a java based web application that uses PostgreSQL 8.0.2.
PostgreSQL runs on its own machine with RHEL 3, an ia32e kernel, dual Xeon
processors, and 4 GB of RAM.
The web application runs on a separate machine from the database. The
application machine has three Tomcat instances configured t
Dan Harris wrote:
> Gurus,
> even the explain never finishes when I try that.
Just a short bit: if "EXPLAIN SELECT" doesn't return, there seems to be
a very serious problem, because as I understand it EXPLAIN doesn't actually
run the query; it only runs the query planner. And the query planner shouldn'
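The difference can be illustrated like this (a sketch against a hypothetical table name; EXPLAIN only invokes the planner, while EXPLAIN ANALYZE actually executes the statement):

```sql
-- EXPLAIN only runs the planner: it prints the chosen plan with cost
-- estimates, without executing the query, so it should return quickly
-- even on a huge table.
EXPLAIN SELECT count(*) FROM mytable;

-- EXPLAIN ANALYZE actually executes the query and reports real row
-- counts and timings, so it can take as long as the query itself.
EXPLAIN ANALYZE SELECT count(*) FROM mytable;
```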
On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
> I might be wrong, but there may be something much more substantially
> wrong than slow i/o.
> John
Yes, I'm afraid of that too. I just don't know what tools I should
use to figure that out. I have some 20 other databases on this
system, sa
* Dan Harris ([EMAIL PROTECTED]) wrote:
> On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
> >I might be wrong, but there may be something much more substantially
> >wrong than slow i/o.
>
> Yes, I'm afraid of that too. I just don't know what tools I should
> use to figure that out. I have so
On Wed, Jul 13, 2005 at 01:16:25PM -0600, Dan Harris wrote:
> On Jul 13, 2005, at 1:11 PM, John A Meinel wrote:
>
> >I might be wrong, but there may be something much more substantially
> >wrong than slow i/o.
>
> Yes, I'm afraid of that too. I just don't know what tools I should
> use to fig
On Jul 13, 2005, at 2:17 PM, Stephen Frost wrote:
> Could you come up w/ a test case that others could reproduce where
> explain isn't returning?
This was simply due to my n00bness :) I had always been doing
explain analyze, instead of just explain. Next time one of these
queries comes up,
On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
> Ian Westmacott <[EMAIL PROTECTED]> writes:
> > On Wed, 2005-07-13 at 11:55, Simon Riggs wrote:
> >> On Tue, 2005-07-12 at 13:50 -0400, Ian Westmacott wrote:
> >>> It appears not to matter whether it is one of the tables
> >>> being written to tha
On Jul 13, 2005, at 2:54 PM, Dan Harris wrote:
> 4 x 2.2GHz Opterons
> 12 GB of RAM
> 4x10k 73GB Ultra320 SCSI drives in RAID 0+1
> 1GB hardware cache memory on the RAID controller
If it is taking that long to update about 25% of your table, then you
must be I/O bound. Check I/O while you're runni
I can at least report that the problem does not seem to
occur with Postgres 8.0.1 running on a dual Opteron.
--Ian
On Wed, 2005-07-13 at 16:39, Simon Riggs wrote:
> On Wed, 2005-07-13 at 14:58 -0400, Tom Lane wrote:
> > Ian Westmacott <[EMAIL PROTECTED]> writes:
> > > On Wed, 2005-07-13
On Wed, 2005-07-13 at 12:54 -0600, Dan Harris wrote:
> For example, as I'm writing this, I am running an UPDATE statement
> that will affect a small part of the table, and is querying on an
> indexed boolean field.
An indexed boolean field?
Hopefully, ftindex is false for very few rows of the
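A plain index on a boolean column rarely helps, since each value matches roughly half the table. If, as the quoted column name suggests, `ftindex` is false for only a few rows, a partial index is the usual fix (a sketch; the table and key column names are hypothetical):

```sql
-- A full index on a boolean column has very low selectivity. A partial
-- index stores only the rare rows, so it stays small and the planner
-- can use it for queries whose WHERE clause matches its predicate.
CREATE INDEX mytable_ftindex_false ON mytable (id) WHERE ftindex = false;

-- This query can now use the small partial index instead of scanning:
SELECT id FROM mytable WHERE ftindex = false;
```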
Hi,
I'm having a problem with a query that performs a sequential scan on a
table when it should be performing an index scan. The interesting thing
is, when we dumped the database on another server, it performed an index
scan on that server. The systems are running the same versions of
postgre
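When two servers pick different plans for the same query, the usual first steps are to refresh the statistics and then compare plan costs with sequential scans artificially penalized (a sketch; table and column names are hypothetical):

```sql
-- Stale planner statistics are the most common reason a sequential
-- scan is chosen over a usable index; refresh them first.
ANALYZE mytable;

-- Compare the two plans: disabling seqscan (for this session only)
-- makes the planner price the index scan so the costs can be compared.
EXPLAIN SELECT * FROM mytable WHERE somecol = 42;
SET enable_seqscan = off;
EXPLAIN SELECT * FROM mytable WHERE somecol = 42;
SET enable_seqscan = on;
```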
"Dennis" <[EMAIL PROTECTED]> writes
>
> checking the status of connections at this point ( ps -eaf | grep
> "postgres:") where the CPU is maxed out I saw this:
>
> 127 idle
> 12 bind
> 38 parse
> 34 select
>
Are you sure the 100% CPU usage is solely contributed by PostgreSQL? Also, from
the ps statu
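Besides ps, a similar breakdown can be obtained from inside the server (a sketch; it assumes `stats_command_string = true` is set in postgresql.conf so that `current_query` is populated):

```sql
-- pg_stat_activity shows one row per backend; with command-string
-- stats enabled, idle connections report '<IDLE>' as their query.
SELECT current_query, count(*)
FROM pg_stat_activity
GROUP BY current_query
ORDER BY count(*) DESC;
```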
Qingqing Zhou wrote:
> Are you sure the 100% CPU usage is solely contributed by PostgreSQL? Also,
> from the ps status you list, I can hardly see that it's a problem because of
> the issue you mentioned below.
The PostgreSQL processes are what is taking up all the CPU. There aren't
any other major applicat
What is the load average on this machine? Do you do many updates? If you
do a lot of updates, perhaps you haven't vacuumed recently. We were
seeing similar symptoms when we started load testing our stuff and it
turned out we were vacuuming too infrequently.
David
Dennis wrote:
Qingqing Zhou
Hi,
Our application requires a number of processes to select and update rows
from a very small (<10 rows) Postgres table on a regular and frequent
basis. These processes often run for weeks at a time, but over the
space of a few days we find that updates start getting painfully slow.
We are runni
Is there any MS SQL Server-like 'Profiler' available for PostgreSQL? A profiler is a tool that monitors the database server and outputs a detailed trace of all the transactions/queries that are executed on a database during a specified period of time. Kindly let me know if any of you know of such
Dan Harris <[EMAIL PROTECTED]> writes:
> I keep the entire database vacuumed regularly.
How often is "regularly"? We get frequent posts from people who think daily or
every 4 hours is often enough. If the table is very busy you can need vacuums
as often as every 15 minutes.
Also, if you've done
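For a busy table, per-table vacuums can be scheduled far more often than a whole-database pass (a sketch; the table name is hypothetical, and on 8.0 the contrib pg_autovacuum daemon can automate the scheduling):

```sql
-- VACUUM reclaims the space held by dead row versions left behind by
-- UPDATE/DELETE; ANALYZE refreshes planner statistics at the same time.
-- A single hot table can be vacuumed every few minutes on its own.
VACUUM ANALYZE mytable;

-- VERBOSE reports how many dead tuples were found, which shows whether
-- the current vacuum interval is keeping up with the update rate.
VACUUM VERBOSE mytable;
```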
Try turning on query logging and using the 'pqa' utility on pgfoundry.org.
Chris
Agha Asif Raza wrote:
> Is there any MS SQL Server-like 'Profiler' available for PostgreSQL? A
> profiler is a tool that monitors the database server and outputs a
> detailed trace of all the transactions/queries that a
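Query logging for a log analyzer such as pqa is enabled in postgresql.conf; a minimal sketch (parameter names as of PostgreSQL 8.0):

```
# postgresql.conf -- log every statement with its duration so that a
# log analyzer such as pqa has something to work with (8.0 names)
log_statement = 'all'             # log all SQL statements
log_min_duration_statement = 0    # also log each statement's duration (ms)
redirect_stderr = on              # capture the server log to files
log_directory = 'pg_log'          # relative to $PGDATA
```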
[reposted due to delivery error -jwb]
I just took delivery of a new system, and used the opportunity to
benchmark postgresql 8.0 performance on various filesystems. The system
in question runs Linux 2.6.12, has one CPU and 1GB of system memory, and
5 7200RPM SATA disks attached to an Areca hardwa
Agha Asif Raza wrote:
> Is there any MS SQL Server-like 'Profiler' available for PostgreSQL? A
> profiler is a tool that monitors the database server and outputs a detailed
> trace of all the transactions/queries that are executed on a database during
> a specified period of time. Kindly let me
On Jul 14, 2005, at 12:12 AM, Greg Stark wrote:
> Dan Harris <[EMAIL PROTECTED]> writes:
>> I keep the entire database vacuumed regularly.
> How often is "regularly"?
Well, once every day, but there aren't a ton of inserts or updates
going on a daily basis. Maybe 1,000 total inserts?
Also,
David Mitchell wrote:
> What is the load average on this machine? Do you do many updates? If
> you do a lot of updates, perhaps you haven't vacuumed recently. We
> were seeing similar symptoms when we started load testing our stuff
> and it turned out we were vacuuming too infrequently.
The load ave