Thank you.
How about:
select c.relname, l.pid, l.mode, l.granted,
a.current_query
from pg_locks l, pg_class c, pg_stat_activity a
where
l.relation = c.oid
AND l.pid = a.procpid
order by l.granted, l.pid;
relname | pid | mode | granted |
hi,
Paul Serby wrote:
Can anyone give a good reference site/book for getting the most out of
your postgres server.
All I can find are contradictory theories on how to work out your settings.
This is what I followed to set up our db server that serves our web
applications.
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total
Anyway, with fsync enabled using standard fsync(), I get roughly
300-400
inserts per second. With fsync disabled, I get about 7000 inserts per
second. When I re-enable fsync but use the open_sync option, I can get
about 2500 inserts per second.
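The options being toggled live in postgresql.conf; a minimal sketch, using the 7.x-era names:
fsync = true                  # default: every commit waits for a WAL flush
#fsync = false                # ~7000 inserts/sec above, but unsafe on crash
wal_sync_method = open_sync   # the variant that reached ~2500 inserts/sec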
You are getting 300-400 inserts/sec with fsync
Paul Langard [EMAIL PROTECTED] writes:
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total runtime:
Squid also takes away the work of doing SSL (presuming you're running it
on a different machine). Unfortunately it doesn't support HTTP/1.1, which
means that most generated pages (those that don't set Content-length) end
up forcing squid to close and then reopen the connection to the web
Paul,
Paul Langard wrote:
Having trouble with one table (see time to count records below!).
Fairly new to postgres so any help much appreciated.
It only contains 9,106 records - as you can see from:
select count(id) from project
count
9106
1 row(s)
Total runtime: 45,778.813 ms
snip
... the
Josh Berkus wrote:
Does the order of columns of varying size have any effect on
SELECT/INSERT/UPDATE/and/or/DELETE performance? Take the example where
an integer primary key is listed first in the table and alternatively
listed after some large varchar or text columns?
No, the order of the
Paul Langard [EMAIL PROTECTED] writes:
select count(id) from project
count
9106
1 row(s)
Total runtime: 45,778.813 ms
Yipes. The only explanation I can think of is tremendous table bloat.
What do you get from vacuum verbose project --- in particular, how
many pages in the table?
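The suggested check, spelled out (project is the table from the question):
VACUUM VERBOSE project;
-- The output reports how many pages the table occupies. 9,106 narrow
-- rows should fit in a few hundred 8 kB pages at most; tens of
-- thousands of pages would confirm bloat, and VACUUM FULL (or a
-- dump/reload) is the usual fix in this era of PostgreSQL.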
... the
Bill,
Does the order of columns of varying size have any effect on
SELECT/INSERT/UPDATE/and/or/DELETE performance? Take the example where
an integer primary key is listed first in the table and alternatively
listed after some large varchar or text columns?
No, the order of the columns in the
Guys, just so you know:
OSDL did some testing and found Ext3 to be perhaps the worst FS for PostgreSQL
-- although this testing was with the default options. Ext3 involved an
almost 40% write performance penalty compared with Ext2, whereas the penalty
for ReiserFS and JFS was less than 10%.
Litao Wu [EMAIL PROTECTED] writes:
How about:
select c.relname, l.pid, l.mode, l.granted,
a.current_query
from pg_locks l, pg_class c, pg_stat_activity a
where
l.relation = c.oid
AND l.pid = a.procpid
order by l.granted, l.pid;
You can't join to pg_class without eliminating the
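The reply is truncated, but the apparent objection is that the inner join to pg_class silently drops lock rows whose relation is NULL (transaction locks, for instance). A sketch of one way to keep them, using an outer join and the 7.4-era column names:
select c.relname, l.pid, l.mode, l.granted, a.current_query
from pg_locks l
  left join pg_class c on l.relation = c.oid     -- keep non-relation locks
  join pg_stat_activity a on l.pid = a.procpid   -- procpid predates 9.2's pid
order by l.granted, l.pid;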
Hi Rod,
Actually, we're already using a substantial caching system in code for
nearly all pages delivered - we've exhausted that option. Our system uses a
login/session table for about 1/8 of our page views (those visitors who are
logged in), and has tracking features. While I'd love to scrap
On Tue, 2004-08-10 at 13:17, Jason Coene wrote:
Hi All,
We're currently running Postgres 7.4.1 on FreeBSD 5.2, a Dual Xeon 2.4, 2GB
ECC, 3Ware Serial ATA RAID 5 w/ 4 disks (SLOW!!).
Our database is about 20GB on disk, we have some quite large tables - 2M
rows with TEXT fields in a sample
Our database is about 20GB on disk, we have some quite large tables - 2M
rows with TEXT fields in a sample table, accessed constantly. We average
about 4,000 - 5,000 queries per second - all from web traffic. As you can
99% is reads? and probably the same data over and over again? You might
Hi,
I have a question on bulk checking, inserting into a table and
how best to use an index for performance.
The data I have to work with is a monthly CD-ROM CSV data dump of
300,000 property owners from one area/shire.
So every CD has 300,000 odd lines, each line of data which fills the
Does that mean reindex is not needed
for PG version 7.4?
In what kind of situations under PG 7.4,
reindex is worthwhile?
Thanks,
Here is doc from 7.3:
PostgreSQL is unable to reuse B-tree index pages in
certain cases. The problem is that if indexed rows are
deleted, those index pages can only
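For completeness, the commands under discussion (index and table names are hypothetical):
REINDEX INDEX project_pkey;  -- rebuild a single index
REINDEX TABLE project;       -- rebuild every index on the table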
If the bulk load has the possibility of duplicating data, then you need
to change methods. Try bulk loading into a temp table, index it like
the original, eliminate the dups and merge the tables.
It is also possible to do an insert from the temp table into the final
table like:
insert into
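A minimal sketch of the temp-table merge described above; the names (owners, owner_id) and file path are invented, and the file is assumed to be plain comma-delimited:
BEGIN;
-- stage the CSV in a temp table shaped like the real one
CREATE TEMP TABLE owners_load AS SELECT * FROM owners LIMIT 0;
COPY owners_load FROM '/tmp/monthly_dump.csv' WITH DELIMITER ',';
CREATE INDEX owners_load_key ON owners_load (owner_id);
-- merge in only the rows not already present
INSERT INTO owners
SELECT DISTINCT l.*
FROM owners_load l
WHERE NOT EXISTS
      (SELECT 1 FROM owners o WHERE o.owner_id = l.owner_id);
COMMIT;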
Usually any bulk load is faster with indexes dropped and then rebuilt ... failing that
(like you really need the indexes while loading, say into a hot table) be sure to
wrap all the SQL into one transaction (BEGIN;...COMMIT;) ... if any data fails it all
fails, which is usually easier to deal
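And a sketch of the drop/load/rebuild pattern with that single-transaction safety net (index, table, and file names are again invented):
BEGIN;
DROP INDEX owners_name_idx;    -- loading an unindexed table is cheaper
COPY owners FROM '/tmp/monthly_dump.csv' WITH DELIMITER ',';
CREATE INDEX owners_name_idx ON owners (name);
COMMIT;                        -- any failure rolls the whole load back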
Hi Jim,
Thanks for your time.
If the bulk load has the possibility of duplicating data
Yes, each row will require either:
a) One SELECT + One INSERT
or
b) One SELECT + One UPDATE
I did think of using more than one table, i.e. a temp table.
As each month's worth of data is added I expect to see
a
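One common shape for the per-row logic in (a)/(b) above is update-first, insert-on-miss; a sketch with invented names, since this era of PostgreSQL has no built-in upsert:
UPDATE owners SET address = '12 Example St'
 WHERE owner_id = 42;
-- the application checks the affected-row count; if it was 0:
INSERT INTO owners (owner_id, address)
VALUES (42, '12 Example St');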
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Rudi
Starcevic) transmitted:
A minute for your thoughts and/or suggestions would be great.
Could you give a more concrete example? E.g. - the DDL for the
table(s), most particularly.
At first guess, I think you're
If it has to read a majority (or even a good percentage) of the rows in question, a
sequential scan is probably faster ... and as Jim pointed out, a temp table can often
be a useful medium for getting speed in a load and then allowing you to clean/alter
data for a final (easy) push.
G
Hi,
In an attempt to throw the authorities off his trail, [EMAIL PROTECTED] (Rudi
Starcevic) transmitted:
A minute for your thoughts and/or suggestions would be great.
Heh heh
Could you give a more concrete example? E.g. - the DDL for the
table(s), most particularly.
Thanks, I didn't add
Greetings.
I have a question regarding performance of certain datatypes:
I have a field where I will store my clients' phone numbers. I know that
this field will never exceed 15 characters, and I will store only
numbers here (no dashes, dots, etc...), so I was wondering:
Which type is faster:
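The question is cut off, but the usual candidates for a 15-digit field are sketched below (illustrative table). bigint is a fixed 8 bytes and compares faster; varchar preserves leading zeros:
CREATE TABLE clients (
    id         serial PRIMARY KEY,
    phone_text varchar(15),  -- keeps leading zeros; roughly 16+ bytes each
    phone_num  bigint        -- fixed 8 bytes; 15 digits fit (max is 19 digits)
);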