Hi,
Did someone try '-mfpmath=sse -msse3'?
Would be interesting to know whether -mfpmath=sse boosts performance.
I guess the difference in the generated code isn't that much between
i686 and prescott. The bigger step is i386 to i686. '-mfpmath=sse
-msse3' will also use the SSE unit, which the
Hi list!
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 = 'Y')
FROM t2
WHERE t1.uid = t2.uid
t2.uid is the PRIMARY KEY.
t2
On Wed, 2006-12-13 at 11:51 +0100, Arnaud Lesauvage wrote:
Hi list!
I am running a query to update the boolean field of a table based on
another table's fields.
The query is (changed names for readability):
UPDATE t1
SET booleanfield = (t2.field1 IN ('some', 'other') AND t2.field2 =
Ragnar wrote:
Do you think the problem is with the indexes?
I guess so. Are you sure about the index on t1.uid?
what are the column definitions for t1.uid and t2.uid ?
are they the same ?
Man, no!!!
I just checked and indeed, there is no index on this column. I probably dropped
it recently.
Hi,
the problem is a combination of badly formed SQL and possibly missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
and index t1.uid, t2.uid, t2.field, t2.field2
regards,
Jens
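Jens's indexing advice could be spelled out as follows. This is a sketch only, using the placeholder table and column names from this thread; note that if t2.uid really is the PRIMARY KEY, it already carries a unique index and needs no extra one.

```sql
-- Sketch only: names are the placeholders used in this thread.
CREATE INDEX t1_uid_idx ON t1 (uid);       -- the one that turned out to be missing here
CREATE INDEX t2_field_idx ON t2 (field);
CREATE INDEX t2_field2_idx ON t2 (field2);
ANALYZE t1;  -- refresh statistics so the planner sees the new indexes
ANALYZE t2;
```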
Jens Schipkowski wrote:
the problem is a combination of badly formed SQL and possibly missing indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y') AS bar FROM
t2) AS foo
WHERE t1.uid=foo.uid;
Hi Jens,
Why is this query better
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
wrote:
Jens Schipkowski wrote:
the problem is a combination of badly formed SQL and possibly missing
indexes.
try this:
UPDATE t1
SET booleanfield = foo.bar
FROM (SELECT uid,(field IN ('some','other') AND field2 = 'Y')
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one? Because it runs the
(field IN ('some','other') AND field2 = 'Y') once and then executes
the join with the resulting set?
True. The subselect in
On Wed, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one? Because it runs the
(field IN ('some','other') AND field2 = 'Y') once and then
Ragnar wrote:
On Wed, 2006-12-13 at 14:38 +0100, Arnaud Lesauvage wrote:
Jens Schipkowski wrote:
On Wed, 13 Dec 2006 13:23:41 +0100, Arnaud Lesauvage [EMAIL PROTECTED]
Why is this query better than the other one? Because it runs the
(field IN ('some','other') AND field2 = 'Y')
Michael Stone wrote:
On Tue, Dec 12, 2006 at 01:42:06PM +0100, Cosimo Streppone wrote:
-O0 ~ 957 tps
-O1 -mcpu=pentium4 -mtune=pentium4 ~ 1186 tps
-O2 -mcpu=pentium4 -mtune=pentium4 ~ 1229 tps
-O3 -mcpu=pentium4 -mtune=pentium4 ~ 1257 tps
-O6 -mcpu=pentium4 -mtune=pentium4 ~ 1254 tps
I'm
Arnaud,
Have you run ANALYZE on the table after creating index?
Also make sure that #effective_cache_size is set properly. A higher
value makes it more likely to use index scans.
Thanks
asif ali
Arnaud Lesauvage [EMAIL PROTECTED] wrote: Ragnar wrote:
On Wed, 2006-12-13 at 14:38
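For reference, the effective_cache_size setting mentioned above lives in postgresql.conf; a sketch with illustrative values only (in releases of this era the setting is measured in 8 kB pages, and random_page_cost, discussed elsewhere in the thread, is a related knob):

```
# postgresql.conf -- illustrative values, not recommendations
effective_cache_size = 50000   # ~390 MB of assumed OS cache, in 8 kB pages
random_page_cost = 3           # lowering this also biases the planner toward index scans
```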
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table (same structure as a permanent table),
then do a bulk insert from the temp table to
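The staging pattern described above might look like this sketch (table names are illustrative; CREATE TABLE ... (LIKE ...) is available from 7.4 on):

```sql
-- Sketch: stage single-row inserts in a temp table, then copy in one statement.
CREATE TEMP TABLE staging (LIKE permanent_table);   -- same structure as the target
-- ... thousands of individual INSERTs into staging ...
INSERT INTO permanent_table SELECT * FROM staging;  -- the one bulk insert
TRUNCATE staging;                                   -- empty it for the next batch
```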
Arnaud Lesauvage [EMAIL PROTECTED] writes:
Indeed, the new query does not perform that well:
Hash Join (cost=112.75..307504.97 rows=2024869 width=355) (actual
time=53.995..246443.811 rows=2020061 loops=1)
...
Total runtime: 2777844.892 ms
I removed all unnecessary indexes on t1 before
asif ali wrote:
Arnaud,
Have you run ANALYZE on the table after creating index?
Yes, I have!
Also make sure that #effective_cache_size is set properly. A higher
value makes it more likely to use index scans.
It is set to 50000. I thought this would be enough, and maybe too
On 12/13/06, Steven Flatt [EMAIL PROTECTED] wrote:
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table
1. how frequently are you
At 11:11 AM 12/13/2006, Cosimo Streppone wrote:
Interesting, eh?
Cosimo
What I find interesting is that so far Guido's C2D Mac laptop has
gotten the highest values by far in this set of experiments, and no
one else is even close.
The slowest results, Michael's, are on the system with what
Mark Lewis [EMAIL PROTECTED] writes:
But he's using 8.1.4-- in that version, an explain analyze would list
the time taken to go through triggers, so the fact that we don't see any
of those lines means that it can't be constraint checking, so wouldn't
it have to be the index update overhead?
What I find interesting is that so far Guido's C2D Mac laptop has
gotten the highest values by far in this set of experiments, and no
one else is even close.
The slowest results, Michael's, are on the system with what appears
to be the slowest CPU of the bunch; and the ranking of the rest of
Hi,
I have a query that uses an IN clause, and it seems to perform great
when there are more than two values in it, but if there is only one it is
really slow. Also, if I change the query to use an = instead of IN in the
case of only one value, it is still slow. Possibly I need to reindex this
On Wed, 2006-12-13 at 13:42 -0500, Tim Jones wrote:
I have a query that uses an IN clause, and it seems to perform great
when there are more than two values in it, but if there is only one it is
really slow. Also, if I change the query to use an = instead of IN in the
case of only one value it
Tim Jones [EMAIL PROTECTED] writes:
I have a query that uses an IN clause, and it seems to perform great
when there are more than two values in it, but if there is only one it is
really slow. Also, if I change the query to use an = instead of IN in the
case of only one value, it is still slow.
I've currently got this table:
,----
| n=# \d nanpa
|       Table "public.nanpa"
|  Column |     Type     | Modifiers
| --------+--------------+-----------
|  state  | character(2) |
|  npa    | character(3) | not null
|  nxx    | character(3) | not null
|  ocn    |
Have you run vacuum/analyze on the table?
--
Husam
-Original Message-
From: [EMAIL PROTECTED]
[mailto:[EMAIL PROTECTED] On Behalf Of James Cloos
Sent: Wednesday, December 13, 2006 10:48 AM
To: pgsql-performance@postgresql.org
Subject: [PERFORM] Optimizing a query
I've currently got
The tables for these queries are vacuumed and analyzed regularly. I
just did an analyze to be sure, and here are the results:
explain analyze select * from battery join observationresults on
battery.batteryidentifier = observationresults.batteryidentifier left
outer join observationcomment on
At 01:49 PM 12/13/2006, Bucky Jordan wrote:
I've only seen pgbench numbers above 2,000 tps on either really large
hardware (none of the above mentioned comes close) or when the results
are in memory due to a small database size (i.e., measuring update contention).
Which makes a laptop achieving such
On 13.12.2006, at 19:03, Ron wrote:
What I find interesting is that so far Guido's C2D Mac laptop has
gotten the highest values by far in this set of experiments, and no
one else is even close.
This might be the case because I have tested with fsync=off, as my
internal hard drive would be
James Cloos [EMAIL PROTECTED] writes:
... and was doing queries of the form:
| select * from nanpa where npa=775 and nxx=413;
If those are char(3) columns, shouldn't you be quoting the constants?
select * from nanpa where npa='775' and nxx='413';
Any non-numeric input will fail entirely
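Spelled out (assuming an index covering npa and nxx, as the queries in this thread suggest):

```sql
-- Unquoted: numeric literals compared against character(3) columns,
-- which can keep the planner from using an index on those columns.
SELECT * FROM nanpa WHERE npa = 775 AND nxx = 413;

-- Quoted: the literals match the column type, so an index on (npa, nxx) is usable.
SELECT * FROM nanpa WHERE npa = '775' AND nxx = '413';
```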
Version 8.1
Here are the planner constraints I believe we changed
effective_cache_size and random_page_cost
BTW this is an AIX 5.2
#---
# QUERY TUNING
Husam == Tomeh, Husam [EMAIL PROTECTED] writes:
Husam Have you run vacuum/analyze on the table?
Yes, back when I first noticed how slow it was.
It did not make any difference.
explain analyze says:
,----
| n=# explain analyse select * from nanpa where npa=775 and nxx=473;
|
Tim Jones [EMAIL PROTECTED] writes:
Which PG version is this exactly? Are you running with any nondefault
planner parameters?
Version 8.1
8.1.what?
Here are the planner constraints I believe we changed
effective_cache_size and random_page_cost
Those look reasonably harmless.
My best bet
Looks like 8.1.2
Tim Jones
Healthcare Project Manager
Optio Software, Inc.
(770) 576-3555
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 13, 2006 5:37 PM
To: Tim Jones
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] strange query
That's what I did, and got 8.1.2 ... do you want the gcc version etc.? 3.3.2
powerpc aix5.2
Tim Jones
Healthcare Project Manager
Optio Software, Inc.
(770) 576-3555
-Original Message-
From: Matthew O'Connor [mailto:[EMAIL PROTECTED]
Sent: Wednesday, December 13, 2006 5:51 PM
To: Tim Jones
Your npa and nxx columns have character datatype, so you should use
quotes. Try:
explain analyze select * from nanpa where npa='775' and
nxx='473';
If that does not work, you could try to influence the planner's
execution plan to favor index scans over sequential scan by tweaking a
After running some further standalone tests using temp tables, I'm not
convinced the problem is specific to temp table usage. In fact it looks
like generic SQL activity degrades over time.
Having said that, what kinds of things should I be looking for that could
deteriorate/bloat over time?
Tim Jones [EMAIL PROTECTED] writes:
[ explain results ]
As best I can see, the problem is with the estimate of the size of the
inner join: for two keys we have
->  Nested Loop  (cost=4.01..9410.49 rows=13 width=145) (actual
time=0.227..0.416 rows=30 loops=1)
->  Bitmap
Steven Flatt [EMAIL PROTECTED] writes:
Having said that, what kinds of things should I be looking for that could
deteriorate/bloat over time? Ordinarily the culprit might be infrequent
vacuuming or analyzing, but that wouldn't be corrected by a restart of
Postgres. In our case, restarting
Steven Flatt [EMAIL PROTECTED] writes:
Having said that, what kinds of things should I be looking for that could
deteriorate/bloat over time? Ordinarily the culprit might be infrequent
vacuuming or analyzing, but that wouldn't be corrected by a restart of
Postgres. In our case, restarting
Tom Lane wrote:
Alvaro Herrera [EMAIL PROTECTED] writes:
While skimming over the pgbench source it has looked to me like it's
necessary to pass the -s switch (scale factor) to both the
initialization (-i) and the subsequent (non -i) runs.
No, it's not supposed to be, and I've never found
[offtopic];
hmm quite a long thread below is stats of posting
Total Messages: 87    Total Participants: 27
-
19 Daniel van Ham Colchete
12 Michael Stone
9 Ron
5 Steinar H. Gunderson
5 Alexander Staubo
4 Tom Lane
4
Bruce,
pgbench is designed to be a general benchmark, meaning it exercises
all parts of the system. I am thinking just reexecuting a single SELECT
over and over again would be a better test of the CPU optimizations.
Mostly, though, pgbench just gives the I/O system a workout. It's not a
On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:
Bruce,
pgbench is designed to be a general benchmark, meaning it exercises
all parts of the system. I am thinking just reexecuting a single SELECT
over and over again would be a better test of the CPU optimizations.
Mostly,
Joshua D. Drake [EMAIL PROTECTED] writes:
On Wed, 2006-12-13 at 18:36 -0800, Josh Berkus wrote:
Mostly, though, pgbench just gives the I/O system a workout. It's not a
really good general workload.
It also will not utilize all CPUs on a many-CPU machine. We recently
found that the only way
On Dec 14, 2006, at 14:44 , Tom Lane wrote:
The pgbench app itself becomes the bottleneck at high transaction
rates. Awhile back I rewrote it to improve its ability to issue
commands concurrently, but then desisted from submitting the
changes --- if we change the app like that, future numbers
On Dec 11, 2006, at 12:54 PM, Bruno Wolff III wrote:
On Wed, Dec 06, 2006 at 08:55:14 -0800,
Mark Lewis [EMAIL PROTECTED] wrote:
Anyone run their RAIDs with disk caches enabled, or is this akin to
having fsync off?
Disk write caches are basically always akin to having fsync off. The
only
Benchmarks, like any other software, need modernizing and updating from time to time.
Given the multi-core CPU approach to higher performance as the
current fad in CPU architecture, we need a benchmark that is appropriate.
If SPEC feels it is appropriate to rev their benchmark suite
regularly, we
(Re)-Design it to do both, unless there's reason to believe that doing one
after the other would skew the results.
Then old results are available, new results are also visible and useful for
future comparisons. And seeing them side by side might be an interesting
exercise as well, at least for