Nitpicking --
Perhaps the 4th data line is meant to be:
Inserts in separate transactions 2500 inserts/second
^^^
??
Greg Williamson
-Original Message-
From: Bruce Momjian [mailto:[EMAIL PROTECTED]
Sent: Tue 9/9/2003 8:25 PM
To:
Why is my name on a mail from Tom Lane ? Really, he knows a *lot* more than I and
should get due credit.
Seriously, is this the performance remailer mangling something ?
Greg Williamson
(the real one)
-Original Message-
From: Gregory S. Williamson
Sent: Sun 6/6/2004 10:46 PM
Usually any bulk load is faster with indexes dropped and then rebuilt ... failing that
(like you really need the indexes while loading, say into a hot table) be sure to
wrap all the SQL into one transaction (BEGIN; ... COMMIT;) ... if any data fails it all
fails, which is usually easier to deal with.
If it has to read a majority (or even a good percentage) of the rows in question a
sequential scan is probably faster ... and as Jim pointed out, a temp table can often
be a useful medium for getting speed in a load and then allowing you to clean/alter
data for a final (easy) push.
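A minimal sketch of that staging approach, all inside one transaction (table and column names here are invented for illustration):

```sql
-- One transaction: a single bad row aborts the whole batch cleanly.
BEGIN;

-- Hypothetical staging table; TEMP tables vanish at end of session.
CREATE TEMP TABLE load_stage (
    id      integer,
    county  text,
    payload text
);

-- Bulk-load from a file; COPY is far faster than row-by-row INSERTs.
COPY load_stage FROM '/tmp/batch.csv' WITH CSV;

-- Clean/alter the data in the staging table, then the final (easy) push.
INSERT INTO target_table (id, county, payload)
SELECT id, upper(county), payload
FROM load_stage
WHERE payload IS NOT NULL;

COMMIT;
```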
G
Not sure about the overall performance, etc. but I think that in order to collect
statistics you need to set some values in the postgresql.conf config file, to wit:
#---
# RUNTIME STATISTICS
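In the 8.x-era postgresql.conf those settings looked roughly like this (names changed in later releases, so check your own config file):

```
# RUNTIME STATISTICS (8.x-era names)
stats_start_collector = on   # master switch for the statistics collector
stats_row_level = on         # needed for pg_stat_user_tables row counts
stats_block_level = on       # needed for pg_statio_* block-level I/O stats
```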
FWIW,
Informix does allow the fragmentation of data over named dbspaces by round-robin and
expression; this is autosupporting as long as the dba keeps enough space available.
You may also fragment the index although there are some variations depending on type
of Informix (XPS, etc.); this is
If you have set up the postgres instance to write stats, check the tables
pg_stat_user_indexes, pg_statio_all_indexes and so on (use \dS at the psql
prompt to see these system views); also check the pg_stat_user_tables table and
similar beasts for information on total access, etc. Between
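For instance, a quick look at which tables are getting hammered by sequential scans versus index scans (column names are from the standard stats views):

```sql
-- Per-table access counts from the statistics collector.
SELECT relname,
       seq_scan,    -- sequential scans started on this table
       idx_scan,    -- index scans
       n_tup_ins,   -- rows inserted
       n_tup_upd,   -- rows updated
       n_tup_del    -- rows deleted
FROM pg_stat_user_tables
ORDER BY seq_scan DESC;
```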
Igor,
I'm not sure if it is proper to state that schemas are themselves speeding things up.
As an example, we have data that is usually accessed by county; when we put all of the
data into one big table and select from it using a code for a county of interest, the
process is fairly slow as
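The point being that the speedup comes from the smaller per-county tables, not from schemas per se; schemas just organize the split. A sketch of that layout (schema and table names invented here):

```sql
-- One schema per county, each holding an identically-shaped table.
CREATE SCHEMA county_sf;
CREATE TABLE county_sf.parcels (id integer, geom text);

CREATE SCHEMA county_la;
CREATE TABLE county_la.parcels (id integer, geom text);

-- Picking a county becomes a search_path switch over a small table,
-- not a filter over one big one:
SET search_path TO county_sf;
SELECT count(*) FROM parcels;
```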
Rodrigo --
You should definitely drop the indexes and any FK constraints before
loading and then rebuild them. Check your logs and see if there are warnings
about checkpoint intervals -- only 3 logs seems like it might be small; if you
have the disk space I would definitely consider
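The drop-load-rebuild cycle looks roughly like this (all names hypothetical); one pass over the loaded data to rebuild beats maintaining the index row by row during the load:

```sql
-- Drop constraints and indexes first ...
ALTER TABLE orders DROP CONSTRAINT orders_customer_fk;
DROP INDEX orders_customer_idx;

-- ... do the bulk load ...
COPY orders FROM '/tmp/orders.csv' WITH CSV;

-- ... then rebuild in one pass.
CREATE INDEX orders_customer_idx ON orders (customer_id);
ALTER TABLE orders ADD CONSTRAINT orders_customer_fk
    FOREIGN KEY (customer_id) REFERENCES customers (id);
```

If the logs warn about checkpoints occurring too frequently, raising checkpoint_segments from its old default of 3 is the usual fix, at the cost of more WAL disk space.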
Amrit --
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Mon 1/3/2005 12:18 AM
To: Mark Kirkwood
Cc: pgsql-performance
Subject: Re: [PERFORM] Low Performance for big hospital server ..
shared_buffers = 12000 will use 12000*8192 bytes (i.e. about 94 MB
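The arithmetic spelled out, since shared_buffers is counted in 8 kB pages (pg_size_pretty is available from 8.1 on):

```sql
SELECT 12000 * 8192 AS bytes,                        -- 98,304,000 bytes
       pg_size_pretty(12000 * 8192::bigint) AS size; -- roughly 94 MB
```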
As a sometimes Informix and PostgreSQL DBA, I disagree with the contentions
below. We have many tables with 10s of millions of rows in Postgres. We have
had (alas) power issues with our lab on more than one occasion and the
afflicted servers have recovered like a champ, every time.
This person
Forgive the cross-posting, but I found myself wondering if there might not be some
future way of telling the planner that a given table (column ?) has a high
likelihood of being TOASTed. Similar to the random_page_cost in spirit. We've
got a lot of indexed data that is spatial and have some
1.451 ms = 1.451 milliseconds
1451.0 ms = 1.451 seconds ...
so 32.918 ms for a commit seems perhaps reasonable ?
Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From: [EMAIL PROTECTED] on behalf of Zeugswetter Andreas DCP SD
Sent: Thu 5/11/2006 12:55 AM
To: Jim C.
Having fsync off would make me very unhappy in a production environment; not
that turning it on would help postgres's speed, but ... one advantage of postgres is
its reliability under a pull-the-plug scenario, and this setting defeats that.
FWIW, Xeon has gotten quite negative reviews in these
A sodden late night idea ... schemas don't need to have names that are
meaningful to outsiders.
Still, the point about political aspects is an important one. OTOH, schemas
provide an elegant way of segregating data.
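A sketch of that segregation, with the schema name deliberately opaque to outsiders (names and roles invented for illustration):

```sql
-- The schema name need not mean anything to outsiders ...
CREATE SCHEMA s_0042;
CREATE TABLE s_0042.measurements (id integer, val numeric);

-- ... while access stays cleanly segregated per role.
CREATE ROLE client_42 LOGIN;
GRANT USAGE ON SCHEMA s_0042 TO client_42;
GRANT SELECT ON s_0042.measurements TO client_42;
```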
My $0.02 (not worth what it was)
Greg Williamson
DBA
GlobeXplorer LLC
Offhand I can't recommend anything, but perhaps you could post the details of
the tables (columns, indexes), and some info on what version of postgres you are
using.
Are the tables recently analyzed ? How many rows in them ?
Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
Based on what other people have posted, hyperthreading seems not to be
beneficial for postgres -- try searching through the archives of this list.
(And then turn it off and see if it helps.)
You might also post a few details:
config settings (shared_buffers, work_mem, maintenance_work_mem, wal
Operating system and some of the basic PostgreSQL config settings would be
helpful, plus any info you have on your disks, the size of the relevant tables,
their structure and indexes, vacuum/analyze status ... plus what others have
said:
Upgrade!
There are considerable improvements in, well,
If your data is valuable I'd recommend against RAID5 ... see
http://www.miracleas.com/BAARF/RAID5_versus_RAID10.txt
performance aside, I'd advise against RAID5 in almost all circumstances. Why
take chances ?
Greg Williamson
DBA
GlobeXplorer LLC
-Original Message-
From: [EMAIL
(Re)-Design it to do both, unless there's reason to believe that doing one
after the other would skew the results.
Then old results are available, new results are also visible and useful for
future comparisons. And seeing them side by side might be an interesting
exercise as well, at least for
This is a query migrated from Informix. In postgres it runs about 10,000 times
*slower* than on Informix on somewhat newer hardware. The problem is entirely
due to the planner. This is PostgreSQL 8.1.4 on Linux, 2 gigs of RAM.
The table:
Table reporting.bill_rpt_work
Column |
-Original Message-
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED]
Sent: Tue 1/9/2007 4:35 AM
To: Gregory S. Williamson
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Horribly slow query/ sequential scan
I don't think I understand the idea behind this query. Do you
From: [EMAIL PROTECTED]
Sent: Tue 1/9/2007 4:50 AM
To: [EMAIL PROTECTED]; Gregory S. Williamson
Cc: pgsql-performance@postgresql.org
Subject: AW: [PERFORM] Horribly slow query/ sequential scan
Forget about IN. It's horribly slow.
try :
select w.appid,
w.rate,
w.is_subscribed
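The shape of the suggested rewrite, since the original query is truncated here (the subquery table `some_list` is invented for illustration; `bill_rpt_work` is from the posted schema): replace the IN (SELECT ...) form, which the 8.1 planner often handles badly, with a plain join:

```sql
-- Slow form the advice warns against:
--   SELECT w.appid, w.rate, w.is_subscribed
--   FROM reporting.bill_rpt_work w
--   WHERE w.appid IN (SELECT appid FROM some_list);

-- Join form of the same query:
SELECT w.appid,
       w.rate,
       w.is_subscribed
FROM reporting.bill_rpt_work w
JOIN some_list s ON s.appid = w.appid;
```

(An EXISTS subquery is the other common rewrite when the join would produce duplicates.)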