Guys,
I have a Java program to tune. It connects via JDBC to a PostgreSQL 7.4.1
server running on Linux.
The program needs to update a counter on a somewhat large number of
rows, about 1,200 in a ~130k-row table. The query is something like
the following:
UPDATE table SET table.par = table.par +
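The statement above is cut off in the archive; as a hedged sketch (all table and column names here are hypothetical), the usual way to keep this fast is one set-based UPDATE rather than ~1,200 single-row UPDATEs issued over JDBC, so the per-statement and round-trip overhead is paid once:

```sql
-- Hypothetical names: "items" is the table, "par" the counter,
-- "pending" holds the keys of the ~1,200 rows to touch.
UPDATE items
   SET par = par + 1
 WHERE id IN (SELECT item_id FROM pending);
```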
On Wed, 2004-03-10 at 12:09, Gavin M. Roy wrote:
> Might want to look at metalog, it does delayed writes, though ultimately
> your issue is io bound and there's not much you can do to reduce io if
> you want to keep syslog logging your pgsql queries and such.
Yeah, but syslog with fsync() after
"Shea,Dan [CIS]" <[EMAIL PROTECTED]> writes:
> How can you improve the performance of cluster?
> 1. BY increasing sort_mem?
Yes.
regards, tom lane
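To make the sort_mem suggestion concrete, a minimal sketch (index and table names are hypothetical; the value is illustrative): on 7.x, sort_mem is a per-sort limit in kilobytes and can be raised for just the session doing the CLUSTER, which helps the sorting done for the index rebuilds.

```sql
-- sort_mem is per-sort, in KB, on 7.x; set it only for this session.
SET sort_mem = 65536;          -- 64 MB; value is illustrative
CLUSTER my_index ON my_table;  -- 7.x CLUSTER syntax; names hypothetical
```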
Might want to look at metalog, it does delayed writes, though ultimately
your issue is io bound and there's not much you can do to reduce io if
you want to keep syslog logging your pgsql queries and such.
Tom Lane wrote:
Greg Spiegelberg <[EMAIL PROTECTED]> writes:
I turned syslog back on and the restore slowed down again.
Chris Kratz <[EMAIL PROTECTED]> writes:
> There are about 67 inherited tables that inherit the fields from this table,
> hence the 134 constraint triggers.
Why "hence"? Inheritance doesn't create any FK relationships. You must
have done so. What are those FK constraints exactly?
> Some of t
Greg Spiegelberg <[EMAIL PROTECTED]> writes:
> I turned syslog back on and the restore slowed down again. Turned
> it off and it sped right back up.
We have heard reports before of syslog being quite slow. What platform
are you on exactly? Does Richard's suggestion of turning off syslog's
fsync help?
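For reference, on classic sysklogd the per-line fsync() is disabled per log file by prefixing the path with a minus sign in syslog.conf. A sketch (the facility and file path are assumptions, not from the thread):

```
# /etc/syslog.conf -- the leading "-" tells syslogd not to fsync()
# the file after every message, which removes the per-line I/O stall.
local0.*    -/var/log/postgresql.log
```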
On Wed, 3 Mar 2004, Chris Kratz wrote:
> Which certainly points to the triggers being the culprit. In reading the
> documentation, it seems like the "delete from only..." statement should
> ignore the constraint triggers. But it seems quite obvious from the
Delete from only merely means that c
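To illustrate the point being made here: ONLY controls which tables in the inheritance tree are touched, not whether triggers fire. A sketch with a hypothetical table name:

```sql
-- Rows in tables that inherit from parent_table are left alone,
-- but every trigger on parent_table itself still fires.
DELETE FROM ONLY parent_table WHERE id = 42;
```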
Hello all,
I have a performance issue that I cannot seem to solve and am hoping that
someone might be able to make some suggestions.
First some background information. We are using PostgreSQL 7.3.4 on Linux
with kernel 2.4.19. The box has a single P4 2.4GHz processor with 1GB RAM
and UW SCSI drives
Andrew Sullivan wrote:
On Thu, Feb 26, 2004 at 12:46:23PM +, teknokrat wrote:
I've read about the place. Would using -O3 be an improvement?
In my experience, it's not only not an improvement, it sometimes
breaks the code. That's on 8, though, not 9.
A
thanks, i remember a thread about prob
hi,
John Siracusa wrote, On 3/3/2004 20:56:
Given an index like this:
CREATE UNIQUE INDEX i1 ON t1 (c1) WHERE c1 IS NOT NULL;
and a query like this:
SELECT * FROM t1 WHERE c1 = 123;
I'd like the planner to be smart enough to use an index scan using i1. Yes,
I can change the query to
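One workaround on 7.x, sketched here, is to restate the partial-index predicate in the query itself so the planner can match it against i1:

```sql
-- The redundant IS NOT NULL clause matches the partial-index
-- predicate, letting the planner consider an index scan on i1.
SELECT * FROM t1 WHERE c1 = 123 AND c1 IS NOT NULL;
```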
I'd look at adding more disks first. Depending on what type of query
load you get, that box sounds like it will be very much I/O bound.
Given a 13G database on a 12G system, with a low
growth rate, it is likely that there is almost no I/O
for most activities. The exception is checkpointing
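Checkpoint behavior on 7.x is tuned in postgresql.conf; a sketch with illustrative values only (the right numbers depend on write volume and recovery-time tolerance):

```
# postgresql.conf (7.x-era settings; values are illustrative)
checkpoint_segments = 16    # more WAL segments between checkpoints
checkpoint_timeout = 300    # seconds between forced checkpoints
```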
Dan,
> INFO: vacuuming "public.forecastelement"
> INFO: "forecastelement": found 93351479 removable, 219177133 nonremovable
The high number of nonremovable above probably indicates that you have a
transaction being held open which prevents VACUUM from being effective.
Look for long-hung processes.
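One way to spot the offender on 7.x (this assumes stats_command_string = true in postgresql.conf, otherwise current_query is not populated):

```sql
-- Sessions showing "<IDLE> in transaction" for a long time are the
-- usual culprits keeping VACUUM from reclaiming dead rows.
SELECT procpid, usename, current_query FROM pg_stat_activity;
```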
On Tuesday 09 March 2004 20:29, Greg Spiegelberg wrote:
> iostat reported ~2000 blocks written every 2
> seconds to the DB file system.
>
> I turned syslog off to see if it was blocking anything and in the
> past couple minutes 1GB has been restored and iostat reports ~35,000
> blocks written ever
I've been waiting all day for a pg_restore to finish on a test system
identically configured as our production in hardware and software
with the exception prod is 7.3.5 and test is 7.4.1.
The file it's restoring from is about 8GB uncompressed from a
"pg_dump -b -F t" and after 2 hours the directory
David Teran <[EMAIL PROTECTED]> writes:
> Merge Join (cost=5369.08..5383.14 rows=150 width=4) (actual
> time=2.527..2.874 rows=43 loops=1)
> Merge Cond: ("outer".id_job_attribute = "inner".id_job_attribute)
> Join Filter: ("inner".int_value < "outer".int_value)
> -> Sort (cost=26
Don't bother with a C function; use a SQL function instead. You could get
a 50% speedup.
Is this always the case when using SQL instead of the C API to get
values, or only for the function 'call' itself? We are thinking of using C
functions which are optimized for the G5 AltiVec unit.
SQL functions are s
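A minimal sketch of the SQL-function route (the function name and signature are assumptions, modeled on the squared-difference expression used in this thread). Simple SQL functions like this can be inlined into the calling query on 7.4, avoiding per-call overhead entirely:

```sql
-- Hypothetical helper; single-quoted body because dollar quoting
-- does not exist before 8.0.
CREATE FUNCTION diff_sq(integer, integer) RETURNS integer
AS 'SELECT ($1 - $2) * ($1 - $2)'
LANGUAGE sql IMMUTABLE;
```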
Hi,
On 08.03.2004, at 02:29, Christopher Kings-Lynne wrote:
explain analyze select
((t0.int_value-t1.int_value)*(t0.int_value-t1.int_value))
from job_property t0, job_property t1
where t0.id_job_profile = 5
and t1.id_job_profile = 6
and t1.id_job_attribute = t0.id_job_attribute
and t1.int_value <