Looks like the MySQL people found a subquery that PostgreSQL doesn't
handle as well as possible:
http://s.petrunia.net/blog/
Is there some deeper issue here that I fail to see, or is it simply that
it hasn't been implemented but is fairly straightforward? In the link
above they do state that
Jason Lustig wrote:
and work_mem to 8096. What would cause the computer to only use such a
small percentage of the CPU, with more than half of it waiting on I/O
requests?
Do your web pages write things to the database on each connection?
Maybe they do a bunch of writes, each individually committed?
Carlos Moreno wrote:
The system does very frequent insertions and updates --- the longest
table has, perhaps, some 20 million rows, and it's indexed (the primary
key is the combination of two integer fields). This longest table only
has inserts (and much less frequent selects), at a peak rate
Carlos Moreno wrote:
When I force it via set enable_seqscan to off, the index scan
takes about 0.1 msec (as reported by explain analyze), whereas
For the time being, I'm using an explicit enable_seqscan off
in the client code, before executing the select. But I wonder:
Is this still an
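For what it's worth, the override can be scoped to a single session or statement rather than left on globally in the client code; a sketch, where the query itself is a made-up stand-in:

```sql
SET enable_seqscan = off;        -- affects only this session
SELECT * FROM t WHERE id = 42;   -- hypothetical slow query
RESET enable_seqscan;            -- back to the planner's default
```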
Rolf Østvik wrote:
I have a simple query which uses 32ms on 7.4.14 and 1015ms on 8.2.0.
I guess 7.4.14 creates a better execution plan than 8.2.0 for this query but
Try to turn off planner options in 8.2 to make it generate the same plan
as 7.4. Then run EXPLAIN ANALYZE on that query that
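A sketch of that procedure (which enable_* switches to flip depends on what node types show up in the 8.2 plan):

```sql
-- Disable the plan node 8.2 picked, e.g. a hash join:
SET enable_hashjoin = off;
-- Rerun the problem query and compare the timings:
EXPLAIN ANALYZE SELECT ...;  -- the actual query is elided here
RESET ALL;                   -- restore all planner settings
```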
On Thu, 29 Dec 2005, Arnau wrote:
Which is the best way to import data to tables? I have to import
9 rows into a column and doing it as inserts takes ages. Would it be
faster with COPY? Is there any other alternative to INSERT/COPY?
Wrap the inserts inside a BEGIN/COMMIT block and it
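A sketch of the two approaches, with a made-up table; COPY is normally the fastest bulk-load path, and batching plain INSERTs under one COMMIT is the next best thing:

```sql
-- Fastest: server-side COPY from a file readable by the backend
COPY items (id, name) FROM '/tmp/items.txt';

-- Next best: many INSERTs, one transaction, one commit
BEGIN;
INSERT INTO items (id, name) VALUES (1, 'one');
INSERT INTO items (id, name) VALUES (2, 'two');
-- ... more rows ...
COMMIT;
```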
On Sun, 6 Nov 2005, PostgreSQL wrote:
SELECT v_barcode, count(v_barcode) FROM lead GROUP BY v_barcode HAVING
count(*) > 1;
This is a dual Opteron box with 16 Gb memory and a 3ware SATA raid
running 64-bit SUSE. Something seems badly wrong.
GroupAggregate (cost=9899282.83..10285434.26
On Wed, 28 Sep 2005, Joe wrote:
Before I post the EXPLAIN and the table schema I'd appreciate
confirmation that this list is the appropriate forum.
It is, and useful things to show are
* the slow query
* EXPLAIN ANALYZE of the query
* the output of \d for each table involved in the
On Tue, 30 Aug 2005, Hemant Pandey wrote:
So please tell me how I can improve database performance through
configuration parameters. I had tried to change parameters in the
postgresql.conf file, but to no avail.
Now I am trying autovacuum, but don't know how to run it.
The most
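For reference: in this era autovacuum is the contrib pg_autovacuum daemon, started separately from the server; without it, a periodic manual run does the same job. A minimal sketch (the table name is made up):

```sql
-- Run from cron or by hand; updates planner statistics and
-- reclaims dead-row space:
VACUUM ANALYZE;

-- Or target one heavily-updated table:
VACUUM ANALYZE orders;
```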
On Tue, 16 Aug 2005, Ulrich Wisser wrote:
Still when several users are on line the reporting gets very slow.
Queries can take more then 2 min.
Could you show an example of such a query and the output of EXPLAIN ANALYZE
on that query (preferably done when the database is slow).
It's hard to
On Wed, 27 Jul 2005, Matthew Schumacher wrote:
Then they do this to insert the token:
INSERT INTO bayes_token (
id,
token,
spam_count,
ham_count,
atime
) VALUES (
?,
?,
?,
?,
?
) ON DUPLICATE KEY
UPDATE
spam_count = GREATEST(spam_count + ?, 0),
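PostgreSQL of this vintage has no ON DUPLICATE KEY UPDATE (much later releases, 9.5+, add INSERT ... ON CONFLICT), so the usual translation is update-then-insert. A sketch against the quoted table, with made-up parameter values; note it is not race-free under concurrency without retry logic or a lock:

```sql
-- Try the update first:
UPDATE bayes_token
   SET spam_count = GREATEST(spam_count + 1, 0)
 WHERE id = 1 AND token = 'abc';

-- If no row matched, insert instead:
INSERT INTO bayes_token (id, token, spam_count, ham_count, atime)
SELECT 1, 'abc', 1, 0, 0
 WHERE NOT EXISTS (SELECT 1 FROM bayes_token
                    WHERE id = 1 AND token = 'abc');
```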
On Thu, 30 Jun 2005, John Mendenhall wrote:
Our setting for effective_cache_size is 2048.
random_page_cost = 4, effective_cache_size = 2048 time approximately 4500ms
random_page_cost = 3, effective_cache_size = 2048 time approximately 1050ms
random_page_cost = 3, effective_cache_size =
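Those knobs can be tried per session while timing, then written into postgresql.conf once a good value is found; a sketch (the numbers are illustrative, and effective_cache_size is measured in 8 kB pages here):

```sql
SET random_page_cost = 3;           -- lower values favor index scans
SET effective_cache_size = 60000;   -- illustrative; size to the OS cache
EXPLAIN ANALYZE SELECT ...;         -- retime the query (elided)
```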
On Fri, 1 Jul 2005, Sam Mason wrote:
The key thing with the query that Sam has is that if you turn off seqscan
you get the first plan that runs in 0.4 ms, and if seqscan is on the runtime
is 27851 ms.
There are 100 ways to make it select the seq scan, including rewriting the
query to something more
On Thu, 23 Jun 2005, Bricklen Anderson wrote:
iii. UNIQUE constraint on table t1. This didn't seem to perform too
badly with fewer rows (preliminary tests), but as you'd expect, on error
the whole transaction would roll back. Is it possible to skip a row if
it causes an error, as opposed to
On Wed, 15 Jun 2005, Todd Landfried wrote:
So, what I need is to be pointed to (or told) what are the best
settings for our database given these memory configurations. What
should we do?
Maybe this will help:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
NOTICE:
On Thu, 9 Jun 2005 [EMAIL PROTECTED] wrote:
I am continously encountering an issue with query plans that changes after
a pg_dump / pg_restore operation has been performed.
Have placed an export of the query, query plan etc. online at:
http://213.173.234.215:8080/plan.htm in order to
On Thu, 9 Jun 2005, Jona wrote:
It's the same (physical) server as well as the same PostGreSQL daemon,
so yes.
The only thing that can differ then is the statistics collected and the
amount of dead space in tables and indexes (but since you both reindex and
run vacuum full that should not be
On Fri, 6 May 2005, Jim C. Nasby wrote:
Has thought been given to supporting inserting multiple rows in a single
insert? DB2 supported:
INSERT INTO table VALUES(
(1,2,3),
(4,5,6),
(7,8,9)
);
I'm not sure how standard that is or if other databases support it.
The sql
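For the record, the standard multi-row VALUES form (which PostgreSQL picked up in 8.2) drops the extra outer parentheses shown in the DB2 quote:

```sql
INSERT INTO t VALUES
    (1, 2, 3),
    (4, 5, 6),
    (7, 8, 9);
```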
On Fri, 8 Oct 2004, Josh Berkus wrote:
As you can see, the sweet spot appears to be between 5% and 10% of RAM,
which is if anything *lower* than recommendations for 7.4!
What recommendation is that? Having shared_buffers be about 10% of
RAM sounds familiar to me. What was
On Wed, 22 Sep 2004, Gaetano Mendola wrote:
Limit (cost=10.21..10.21 rows=1 width=24) (actual time=0.885..0.885 rows=0
loops=1)
Limit (cost=10.28..10.28 rows=1 width=24) (actual time=0.429..0.429 rows=0
loops=1)
These estimated costs are almost the same, but the runtime differs a
On 22 Sep 2004, Greg Stark wrote:
Actually this looks like it's arguably a bug to me. Why does the hash
join execute the sequential scan at all? Shouldn't it also like the
merge join recognize that the other hashed relation is empty and skip
the sequential scan entirely?
I'm not sure you can
On Sat, 11 Sep 2004, Mark Cotner wrote:
There are no performance concerns with MySQL, but it would benefit
greatly from stored procedures, views, etc. It is a very large rolling
data warehouse that inserts about 4.5 million rows every 2 hours and
subsequently rolls this data off the back end
On Fri, 27 Aug 2004, Artimenko Igor wrote:
1. Sequential search and very high cost if set enable_seqscan to on;
Seq scan on messageinfo (cost=0.00..24371.30, rows=36802)
2. Index scan but even bigger cost if set enable_seqscan to off;
Index messagesStatus on messageinfo (
On Wed, 25 Aug 2004, Richard Huxton wrote:
Index Scan using trptserc on trans (cost=0.00..465.10 rows=44 width=118)
Index Cond: (trn_patno = 19)
Filter: ((trn_old_date >= '1994-08-23'::date) AND (trn_old_date <=
'2004-08-23'::date) AND (trn_bill_inc = 'B'::bpchar))
(687 rows)
On Tue, 3 Aug 2004, Martin Foster wrote:
to roughly 175 or more. Essentially, the machine seems to struggle
to keep up with continual requests and slows down respectively as
resources are tied down.
I suggest you try to find queries that are slow and check to see if the
plans are
On Tue, 29 Jun 2004, James Antill wrote:
- Index Scan using idx_ticket_groups_assigned on ticket_groups g
(cost=0.00..241.76 rows=5 width=20) (actual time=0.13..12.67 rows=604 loops=1)
Index Cond: (assigned_to = 1540)
Here the planner estimated that it
On Tue, 29 Jun 2004, Harald Lau (Sector-X) wrote:
Average and sum can never use an index, AFAIK, in any db server. You
need information from every row.
Take a look at the SQL Server equivalent:
create index x_1 on the_table (num_found)
select avg(num_found) from the_table
- Index
On Tue, 11 May 2004, Bjoern Metzdorf wrote:
I am curious if there are any real life production quad processor setups
running postgresql out there. Since postgresql lacks a proper
replication/cluster solution, we have to buy a bigger machine.
Do you run the latest version of PG? I've read
On Fri, 30 Apr 2004, Gary Doades wrote:
I should have also pointed out that MSSQL reported that same index scan
as taking 65% of the overall query time. It was just faster. The
overall query took 103ms in MSSQL.
Are your results based on a single client accessing the database and no
On Sat, 17 Apr 2004, Tom Lane wrote:
*some* set of inputs. (Also, I have been harboring some notions of
supporting cross-type hash joins for integer types, which will not work
unless small int8 values hash the same as int4 etc.)
The simple solution would be to always extend integers to 64
On Sun, 18 Apr 2004, Tom Lane wrote:
That creates portability issues though. We do not depend on there being
a 64-bit-int type for anything except int8 itself, and I don't want to
start doing so.
What do you mean? int8 is supported on all platforms, and if the
hash function would convert all
On Sun, 18 Apr 2004, Tom Lane wrote:
What do you mean? int8 is supported on all platforms
No it isn't.
So on platforms where it isn't you would use int4 as the biggest int then.
I don't really see that as a problem. As long as you calculate the hash on
the biggest int on that platform it
On Sun, 18 Apr 2004, Bruno Wolff III wrote:
Another option would be to put the numbers into two int4s. For int4 or
smaller types one of these would be zero. int8s would be split between
the two. The hash function would then be defined on the two int4s.
Sure, this is an internal calculation in
On Thu, 8 Apr 2004, Cosimo Streppone wrote:
The alternative solution I tried, that has an optimal
speed up, unfortunately is not a way out, and it is based
on EXPLAIN SELECT count(*) output parsing, which
is obviously *not* reliable.
Try this to get the estimate:
SELECT relname,
On Wed, 7 Apr 2004, huang yaqin wrote:
You said turning fsync off may cause losing data, that's terrible. I use
SCSI disk, and file system is ext3. I tune postgresql.conf and can't get
any improvement. So what can I do?
Make sure you do as much as possible inside one transaction. If you want
On Mon, 8 Mar 2004, Chris Smith wrote:
assetid| integer | not null default 0
assetid| character varying(255) | not null default '0'
The types above do not match, and these are the attributes you use to
join.
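Either make the column types agree or cast explicitly in the join; a sketch using the quoted column and hypothetical table names a and b:

```sql
-- Option 1 (8.0+): change the varchar column to integer for good
ALTER TABLE b ALTER COLUMN assetid TYPE integer
    USING assetid::integer;

-- Option 2: cast one side so both are compared as integers
SELECT *
  FROM a
  JOIN b ON a.assetid = b.assetid::integer;
```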
--
/Dennis Björklund
On Fri, 5 Mar 2004, Jeff wrote:
Is there any performance advantage to using a fixed width row (with PG)?
As far as I know there is only a small win when you want to extract some
field from a tuple and with variable width fields you have to walk to the
correct field. But this is a small
On Thu, 29 Jan 2004, Tom Lane wrote:
jackdb-# GROUP BY memberid_ HAVING (
Um, that's not what I had in mind at all. Does GROUP BY actually do
anything at all here? (You didn't answer me as to whether memberid_
is a unique identifier or not, but if it is, this GROUP BY is just an
On Thu, 29 Jan 2004, Jack Coates wrote:
Probably better to repost it as a gzip'd attachment. That should
complete with a picture of the GUI version. 26k zipped, let's see if
this makes it through.
Are you sure you attached it?
At least when it got here there was no attachment.
On Wed, 21 Jan 2004, Jeroen Baekelandt wrote:
jms_messages again. It takes 80 seconds!?! While before, with 1000
records, it took only a fraction of a second.
run: VACUUM FULL ANALYZE;
---(end of broadcast)---
TIP 7:
On Fri, 9 Jan 2004, Richard van den Berg wrote:
problems. However, I've run into an issue where EXPLAIN tells us the
costs of a query are tremendous (105849017586), but the query actually
runs quite fast. Even EXPLAIN ANALYZE shows these costs.
It would be helpful if you can show the
On Sun, 11 Jan 2004, Andrew Rawnsley wrote:
20-25% of the time. Fiddling with CPU_TUPLE_COST doesn't do anything
until I exceed 0.5, which strikes me as a bit high (though please
correct me if I am assuming too much...). RANDOM_PAGE_COST seems to have
no effect.
What about the effective
On Mon, 29 Dec 2003, Sean Shanny wrote:
The first plan below has horrendous performance. We only get about 2%
CPU usage and iostat shows 3-5 MB/sec I/O. The second plan runs at 30%
CPU and 15-30 MB/sec I/O.
Could someone shed some light on why the huge difference in
performance? Both
On Thu, 18 Dec 2003, Shridhar Daithankar wrote:
Well, then the only issue left is file sytem defragmentation.
And the internal fragmentation that can be fixed with the CLUSTER
command.
On Tue, 16 Dec 2003, David Shadovitz wrote:
I backed up my database using pg_dump, and then restored it onto a different
server using psql. I see that the query SELECT COUNT(*) FROM myTable
executes immediately on the new server but takes several seconds on the old
one. (The servers are
On Sat, 13 Dec 2003, Kari Lavikka wrote:
I evaluated pg 7.4 on our development server and it looked just fine
but performance with production loads seems to be quite poor. Most of
performance problems are caused by nonsensical query plans
Some of the estimates that pg made in the plans you
On Mon, 24 Nov 2003, Torsten Schulz wrote:
sort_mem = 32768 # min 32
32 meg per sort can be a lot in total if you have many clients sorting
things. I assume you have checked so that the computer is not pushed into
swapping when you have the peak with lots of users. A swapping
On Fri, 14 Nov 2003, Tom Lane wrote:
I believe the previous discussion also agreed that we wanted to postpone
the freezing of now(), which currently also happens at BEGIN rather than
the first command after BEGIN.
Or should that happen at the first call to now()?
/me should go back and try
On Mon, 10 Nov 2003, Marc G. Fournier wrote:
explain analyze SELECT ts.company_id, company_name, SUM(ts.bytes) AS total_traffic
FROM company c, traffic_logs ts
WHERE c.company_id = ts.company_id
AND month_trunc(ts.runtime) = '2003-10-01'
GROUP BY company_name,ts.company_id;
On Fri, 10 Oct 2003, Hervé Piedvache wrote:
A simple question about PostgreSQL ... I have a quad-processor Pentium
Xeon box ... If I do a SQL request ... does PostgreSQL use one or more
processors?
Each connection becomes a process, and each process runs on one processor.
So,
On Thu, 9 Oct 2003, David Griffiths wrote:
PostgreSQL supports constraints. MySQL doesn't; programmers need to
take care of that from the client side
Again, InnoDB supports constraints.
Really? This is news. We did some tests on constraints on InnoDB, and
found that while they
On Tue, 23 Sep 2003, Bruce Momjian wrote:
With the new warning about too-frequent checkpoints, people have actual
feedback to encourage them to increase checkpoint_segments. One issue
is that it is likely to recommend increasing checkpoint_segments during
restore, even if there is no value
On Wed, 10 Sep 2003, Chris Huston wrote:
Sometime before then I need to figure out how to cut the fetch times
from the now 200ms to something like 10ms.
You didn't say anything about Joshs first point of adjusting
postgresql.conf to match your machine. Settings like effective_cache_size
you