Hi,
I've noticed that PG, up to the current release, does not make use of an index when
aggregating, which of course may result in unacceptable response times.
This behaviour is reproducible on any table, with any aggregate function, in all of my
databases on every machine (PostgreSQL 7.4.2 on
Wow, this simple query is taking 676.24 ms to execute! It only takes
18 ms on our other machine.
This table has 150,000 rows. Is this normal?
no, the machine is not running software RAID. Anyone have any ideas
next as to what I should do to debug this? I'm really wondering if the
Linux OS
live=# explain analyze SELECT id FROM calllogs WHERE "from" = 'you';
                         QUERY PLAN
------------------------------------------------------------
 Seq Scan on calllogs  (cost=0.00..136.11 rows=24 width=4)
Bill wrote:
Ok, so here is the output for oclh (i.e. \d oclh):
      Table "public.oclh"
 Column |         Type          |      Modifiers
--------+-----------------------+---------------------
 symbol | character varying(10) | not null default ''
 date   |
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique (sorry if I'm misunderstanding your
point).
Thanks,
Chris
On Tue, 29 Jun 2004
@Chris:
SELECT count(*) FROM the_table
=> Seq scan - takes about 12 sec
This cannot be made O(1) in postgres due to MVCC. You just have to live
with it.
bad news
BTW: in this case you could work around it with
select reltuples from pg_class where relname='the_table'
(yes, I know: presumes a
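The reltuples workaround above can be spelled out as a quick estimated-count query (a sketch; `the_table` is a placeholder name, and the figure is only as fresh as the last VACUUM / ANALYZE):

```sql
-- Estimated row count from the planner's statistics (PostgreSQL).
-- Near-instant, but only as accurate as the last VACUUM / ANALYZE.
SELECT reltuples::bigint AS estimated_rows
FROM pg_class
WHERE relname = 'the_table';
```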
On Tue, 29 Jun 2004, Harald Lau (Sector-X) wrote:
Average and sum can never use an index AFAIK, in any db server. You
need information from every row.
Take a look at the SQL Server counterpart:
create index x_1 on the_table (num_found)
select avg(num_found) from the_table
=> Index scan
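PostgreSQL 7.x has no index-only scans, so avg()/sum() genuinely must visit every row; for min()/max() of an indexed column, though, there is a well-known rewrite that does use an index (a sketch, reusing the table and column names from the example above):

```sql
-- avg()/sum() must read every visible row, but the min or max of an
-- indexed column can be fetched from the edge of the index:
CREATE INDEX x_1 ON the_table (num_found);

-- Instead of: SELECT max(num_found) FROM the_table;   -- seq scan in 7.4
SELECT num_found FROM the_table
ORDER BY num_found DESC LIMIT 1;                       -- index scan
```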
I am experiencing rather slow INSERTs on loaded server. The table I am
inserting to is:
CREATE TABLE pagestats
(
page_id int4 NOT NULL,
viewed timestamptz DEFAULT now(),
session int4 NOT NULL
) WITH OIDS;
The table is populated with 700k rows. It is VACUUM ANALYZED every
night, though it
On Tue, Jun 29, 2004 at 01:37:30 -0700,
Chris Cheston [EMAIL PROTECTED] wrote:
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes for calllogs(from)... should I? The values for
calllogs(from) are not unique
On Tue, Jun 29, 2004 at 10:46:27 +0200,
Harald Lau (Sector-X) [EMAIL PROTECTED] wrote:
Hmm...
So, it seems that PG is not so well suited for a data warehouse and/or performing
extensive statistics/calculations/reporting on large tables, is it?
If you are doing lots of selects of
Chris Cheston [EMAIL PROTECTED] writes:
Wow, this simple query is taking 676.24 ms to execute! It only takes
18 ms on our other machine.
This table has 150,000 rows. Is this normal?
live=# explain analyze SELECT id FROM calllogs WHERE "from" = 'you';
Michal Táborský [EMAIL PROTECTED] writes:
I am experiencing rather slow INSERTs on loaded server.
... There are no indices, triggers or constraints attached to it.
It's hard to see how inserting to such a simple table would be slow.
Sometimes, it takes as long as 1300ms!
Tom Lane wrote:
It's hard to see how inserting to such a simple table would be slow.
Indeed.
Is the number of inserts between slowdowns perfectly repeatable? My
first thought is that the fast case is associated with inserting onto a
page that is the same one last inserted to, and the slow case is
Don't know about Oracle, but select-distinct in MSSQL2K will indeed throw
away duplicates, which chops the CPU time. Very easy to see in the graphic
query plan, both in terms of CPU and the number of rows retrieved from a
single-node or nested-loop subtree. Definitely a worthwhile optimization.
Michal Taborsky [EMAIL PROTECTED] writes:
I've read the discussion in Trying to minimize the impact of
checkpoints thread and I get it, that there is nothing I can do about
it. Well, we'll have to live with that, at least until 7.5.
You could experiment with the checkpoint interval
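Experimenting with the checkpoint interval in 7.4 means tuning these postgresql.conf settings (a sketch; the values below are illustrative assumptions, and sensible numbers depend on write volume and disk):

```
# postgresql.conf (PostgreSQL 7.4) -- spread checkpoints out so they
# interfere with fewer INSERTs; values here are illustrative only.
checkpoint_segments = 16    # default 3; WAL segments between checkpoints
checkpoint_timeout = 600    # seconds between forced checkpoints (default 300)
checkpoint_warning = 30     # log a warning if checkpoints come this close
```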
Is the from field nullable? If not, try create index calllogs_from on
calllogs ( "from" ); and then do an explain analyze of your query.
Gavin
Chris Cheston wrote:
ok i just vacuumed it and it's taking slightly longer now to execute
(only about 8 ms longer, to around 701 ms).
Not using indexes
Michal Taborsky [EMAIL PROTECTED] writes:
I ran some tests to support this hypothesis. Every 500th insert is a tad
slower, but it is insignificant (normally the INSERT lasts 1.5ms, every
500th is 9ms). During my tests (10 runs of 1000 INSERTS) I had
experienced only one slow insert
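The test Michal describes (timing a batch of single-row INSERTs and flagging slow outliers) can be sketched like this; sqlite3 is used here only as a stand-in, since the original ran against PostgreSQL, and the table mirrors the pagestats definition above:

```python
import sqlite3
import time

# Stand-in for the pagestats table; sqlite3 is used only to illustrate
# the measurement -- the original test ran against PostgreSQL.
conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE pagestats (
    page_id INTEGER NOT NULL,
    viewed  TEXT DEFAULT CURRENT_TIMESTAMP,
    session INTEGER NOT NULL)""")

times_ms = []
for i in range(1000):
    start = time.perf_counter()
    conn.execute("INSERT INTO pagestats (page_id, session) VALUES (?, ?)",
                 (i, i % 50))
    conn.commit()  # one transaction per INSERT, as a web app would issue
    times_ms.append((time.perf_counter() - start) * 1000.0)

# Flag outliers: anything taking more than 5x the median counts as "slow".
median = sorted(times_ms)[len(times_ms) // 2]
slow = [t for t in times_ms if t > 5 * median]
print(f"{len(times_ms)} inserts, median {median:.3f} ms, {len(slow)} slow")
```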
[EMAIL PROTECTED] wrote:
I'm using PostgreSQL 7.4.2 (package from backports.org)
on a Debian (woody) box. The machine is IBM eServer 345
with two 2.8 Xeon CPUs, it has 1024MB of RAM and
two 15k RPM SCSI disks running in hardware RAID1, which
is provided by the onboard LSI Logic controller
Laurent == Laurent Rathle [EMAIL PROTECTED] writes:
Laurent On Monday 28 June 2004 at 10:14, Sylvain Lhullier wrote:
FYI, the Parinux video projector should be joining the RMLL
(barring objection from the CA).
We are trying to find a way to get it there
(knowing
On Tue, 2004-06-29 at 02:46, Harald Lau (Sector-X) wrote:
@Chris:
SELECT count(*) FROM the_table
=> Seq scan - takes about 12 sec
This cannot be made O(1) in postgres due to MVCC. You just have to live
with it.
bad news
BTW: in this case you could workaround
select reltuples
Hello All,
We are building a web based application which is database
intensive (we intend to use postgresql). Expect about 600
concurrent users. We are using Zope and python with postgresql on
RedHat Enterprise Linux. Our server has dual Intel Xeon 2.4 GHz
and 2 Gig Ram with lots of
On Tue, Jun 29, 2004 at 09:17:36AM -0700, Marc wrote:
Performance issue, I'm experiencing here, is somewhat
weird - server gets high average load (from 5 up to 15,
8 on average). Standard performance monitoring
utilities (like top) show that CPUs are not loaded
(below 20%, often near
Ok, thanks. So let me explain the query number 2 as this is the more
difficult to write. So I have a list of stocks, this table contains the
price of all of the stocks at the open and close date. Ok, now we have a
ratio from query (1) that returns at least a very rough index of the daily
Note that there ARE other options. While the inability to provide a
speedy count is a cost of using an MVCC system, the ability to allow
thousands of readers to run while updates are happening underneath them
more than makes up for the slower aggregate performance.
IMO this depends on the
Bill wrote:
Ok, thanks. So let me explain the query number 2 as this is the more
difficult to write. So I have a list of stocks, this table contains the
price of all of the stocks at the open and close date. Ok, now we have a
ratio from query (1) that returns at least a very rough index of the
I'm trying to make a (qua-technical, qua-business) case for switching from
MS SQL, and one of the types of query that really doesn't sit well with MS
SQL2K is:
-- All fields integers or equivalent.
-- Table T(k, x: nonkey fields...)
-- Table U(k, a, z: m)-- for each value of (k) a set of
On Tue, Jun 29, 2004 at 12:33:51 -0500,
Bill [EMAIL PROTECTED] wrote:
Ok, thanks. So let me explain the query number 2 as this is the more
difficult to write. So I have a list of stocks, this table contains the
price of all of the stocks at the open and close date. Ok, now we have a
ratio
On Tue, 2004-06-29 at 09:55, [EMAIL PROTECTED] wrote:
Hello,
I'm using PostgreSQL 7.4.2 (package from backports.org)
on a Debian (woody) box. The machine is IBM eServer 345
with two 2.8 Xeon CPUs, it has 1024MB of RAM and
two 15k RPM SCSI disks running in hardware RAID1, which
is provided
Eleven,
In particular - could someone tell me if those iostat
values can tell if I'm close to upper performance boundary
of fast SCSI (Ultra 320, 15k RPM) disks?
It's quite possible that you need to improve your disk array; certainly I
would have spec'd a lot more disk than you're using
Mohan,
I am looking at good rules to use for memory, WAL, SQL Transaction
Isolation Levels and cache configuration along with anything else
that I have missed.
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
For anything beyond that, you're asking for a service I charge