I made a new install on the machine tonight, and got the same results, on
console localhost
Windows 2000 Server
Version 5.00.2195
PG Version 8.1
Franklin
Franklin: are you making pg_dump from a local or remote box, and is this a
clean install? Try a fresh patched win2k install and see what happens.
On Thu, Dec 01, 2005 at 02:50:56PM +0900, Tatsumi Abe wrote:
Could anyone suggest some solutions for this fragmentation problem
without stopping PostgreSQL?
Stop doing VACUUM FULL so often. If your table size is constant anyway
you're just wasting time by compacting the table and shrinking it,
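For what it's worth, the difference can be sketched as follows; `mytable` is a placeholder, not a table from this thread:

```sql
-- Plain VACUUM marks dead row versions as free space for reuse and does
-- not take an exclusive lock, so the table stays fully available.
VACUUM ANALYZE mytable;

-- VACUUM FULL compacts the table and shrinks the file, but it holds an
-- exclusive lock for the duration -- only worth it if the table really
-- shrank and the space will not be reused by new rows anyway.
-- VACUUM FULL mytable;
```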
On Dec 1, 2005, at 00:50, Tatsumi Abe wrote:
However, in my current environment I can't stop PostgreSQL to clear the
fragmentation.
Could anyone suggest some solutions for this fragmentation problem
without stopping PostgreSQL?
This is somewhat of an aside and intended just as a helpful
Hi there,
I need a simple but large table with several million records. I do batch
inserts with JDBC. After the first million or so records,
the inserts degrade and become VERY slow (like 8 minutes vs. initially 20
seconds).
The table has no indices except PK while I do the inserts.
This is
Hi,
we are currently running a postgres server (upgraded to 8.1) which has
one large database with approx. 15,000 tables. Unfortunately performance
suffers from that, because the internal tables (especially that which
holds the attribute info) get too large.
(We NEED that many tables,
Wolfgang Gehner wrote:
Hi there,
I need a simple but large table with several million records. I do batch
inserts with JDBC. After the first million or so records,
the inserts degrade and become VERY slow (like 8 minutes vs. initially 20
seconds).
The table has no indices except PK while I
On Thu, 1 Dec 2005, Michael Riess wrote:
Hi,
we are currently running a postgres server (upgraded to 8.1) which has one
large database with approx. 15,000 tables. Unfortunately performance suffers
from that, because the internal tables (especially that which holds the
attribute info) get
this subject has come up a couple times just today (and it looks like one
that keeps popping up).
under linux, ext2/3 has two known weaknesses (or rather, one weakness with
two manifestations): searching through large objects on disk is slow; this
applies to both directories (creating,
Hi David,
incidentally: The directory which holds our database currently contains
73883 files ... do I get a prize or something? ;-)
Regards,
Mike
Hi David,
with 15,000 tables you are talking about a LOT of files to hold these
(30,000 files with one index each, and each table being small enough
to not need more than one file to hold it); on linux ext2/3 this many
files in one directory will slow you down horribly.
We use
On Thu, 1 Dec 2005, Richard Huxton wrote:
Tatsumi Abe wrote:
The question is about the relation between file fragmentation and VACUUM
performance.
Environment
OS:RedHat Enterprise Linux AS Release 3(Taroon Update 6)
Kernel 2.4.21-37.ELsmp on an i686
Filesystem Type ext3
Filesystem
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
Hi,
we are currently running a postgres server (upgraded to 8.1) which has
one large database with approx. 15,000 tables. Unfortunately performance
suffers from that, because the internal tables (especially that which
holds the attribute
Hi!
I've got an urgent problem with an application which is evaluating a
monthly survey; it's running quite a lot of queries like this:
select SOURCE.NAME as TYPE,
count(PARTICIPANT.SESSION_ID) as TOTAL
from (
select PARTICIPANT.SESSION_ID
from survey.PARTICIPANT,
Wolfgang Gehner [EMAIL PROTECTED] writes:
This is with PostgreSQL 8.0 final for WindowsXP on a Pentium 1.86 GHz,
1GB Memory. HD is fast IDE.
Try something more recent, like 8.0.3 or 8.0.4. IIRC we had some
performance issues in 8.0.0 with tables that grew from zero to large
size during a
Michael Riess [EMAIL PROTECTED] writes:
(We NEED that many tables, please don't recommend to reduce them)
No, you don't. Add an additional key column to fold together different
tables of the same structure. This will be much more efficient than
managing that key at the filesystem level, which
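A minimal sketch of the key-column approach; the table and column names are illustrative, not taken from the thread:

```sql
-- One shared table replaces many per-application tables of the same
-- structure; app_id tells the rows apart.
CREATE TABLE articles (
    app_id     integer NOT NULL,
    article_id integer NOT NULL,
    title      text,
    body       text,
    PRIMARY KEY (app_id, article_id)
);

-- Each application queries only its own slice:
SELECT title, body FROM articles WHERE app_id = 42;
```

With one table instead of 500, the system catalogs stay small and the data directory holds a handful of files instead of tens of thousands.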
On 12/1/05, Tom Lane [EMAIL PROTECTED] wrote:
Michael Riess [EMAIL PROTECTED] writes:
(We NEED that many tables, please don't recommend to reduce them)
No, you don't. Add an additional key column to fold together different
tables of the same structure. This will be much more efficient than
Franklin: are you making pg_dump from a local or remote box, and is this a
clean install? Try a fresh patched win2k install and see what happens.
He claimed this was local, not network. It is certainly an
intriguing possibility that W2K and WinXP handle bytea
differently. I'm not competent to
Hi,
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
Hi,
we are currently running a postgres server (upgraded to 8.1) which has
one large database with approx. 15,000 tables. Unfortunately performance
suffers from that, because the internal tables (especially that which
holds the attribute
Markus Wollny [EMAIL PROTECTED] writes:
My current problem is that rewriting hundreds of queries, some of them
quite a bit more complex than this one, but all of them using the same
general scheme, would take quite a lot of time - and I'm expected to
hand over the survey results ASAP. So I
-Original Message-
From: Tom Lane [mailto:[EMAIL PROTECTED]
Sent: Thursday, December 1, 2005 17:26
To: Markus Wollny
Cc: pgsql-performance@postgresql.org
Subject: Re: [PERFORM] Queries taking ages in PG 8.1, have
been much faster in PG <= 8.0
It looks like set
On 01.12.2005, at 17:04, Michael Riess wrote:
No. Our database contains tables for web content management systems.
The server hosts approx. 500 cms applications, and each of them has
approx. 30 tables.
Just out of curiosity: are the roughly 30 tables similar in schema,
or do they
Markus Wollny [EMAIL PROTECTED] writes:
Once you're not under deadline,
I'd like to investigate more closely to find out why 8.1 does
worse than 8.0 here.
Please tell me what I can do to help in clearing up this issue, I'd be
very happy to help!
The first thing to do is get 8.0's EXPLAIN
Hi Tom,
Michael Riess [EMAIL PROTECTED] writes:
(We NEED that many tables, please don't recommend to reduce them)
No, you don't. Add an additional key column to fold together different
tables of the same structure. This will be much more efficient than
managing that key at the filesystem
hi michael
Have you ANALYZEd your database? VACUUMing?
Of course ... before 8.1 we routinely did a vacuum full analyze each
night. As of 8.1 we use autovacuum.
what i noticed is that autovacuum is not working as it should. i had 8.1
running with autovacuum for just 2 days or so and got
Not having found anything so far, does anyone know of, and can point me
to, tools or articles that talk about tuning based on this sort of
information?
Marc G. Fournier Hub.Org Networking Services (http://www.hub.org)
Email:
David Lang [EMAIL PROTECTED] wrote
a few weeks ago I did a series of tests to compare different filesystems.
the test was for a different purpose, so the particulars are not what I
would do for testing aimed at postgres, but I think the data is relevant,
and I saw major differences between
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance suffers from that, because the internal tables
(especially
Hi Michael,
I'm a fan of ReiserFS, and I may be wrong, but I believe using a
journaling filesystem for the PgSQL database could be slowing things
down.
Gavin
On Dec 1, 2005, at 6:51 AM, Michael Riess wrote:
Hi David,
with 15,000 tables you are talking about a LOT of files to hold
I'm running postgresql 8.1.0 with postgis 1.0.4 on a
FC3 system, 3Ghz, 1 GB memory.
I am using COPY to fill a table that contains one postgis
geometry column.
With no geometry index, it takes about 45 seconds to COPY
one file.
If I add a geometry index, this time degrades. It
keeps
On Thursday, 01.12.2005, at 10:07 -0800, Gavin M. Roy wrote:
Hi Michael,
I'm a fan of ReiserFS, and I may be wrong, but I believe using a
journaling filesystem for the PgSQL database could be slowing things
down.
Have a 200G+ database, someone pulling the power plug
or a regular
Agreed. Also the odds of fs corruption or data loss are higher in a
non journaling fs. Best practice seems to be to use a journaling fs
but to put the fs log on dedicated spindles separate from the actual
fs or pg_xlog.
Ron
At 01:40 PM 12/1/2005, Tino Wildenhain wrote:
On Thursday, the
Here's a fairly recent post on reiserfs (and performance):
http://archives.postgresql.org/pgsql-novice/2005-09/msg7.php
I'm still digging on performance of ext2 vs. journaled filesystems,
as I know I've seen it before.
Gavin
My point was not in doing an fsck, but rather in
On Dec 1,
Ron [EMAIL PROTECTED] writes:
Agreed. Also the odds of fs corruption or data loss are higher in a
non journaling fs. Best practice seems to be to use a journaling fs
but to put the fs log on dedicated spindles separate from the actual
fs or pg_xlog.
I think we've determined that best
I am importing roughly 15 million rows in one batch transaction. I am
currently doing this through batch inserts of around 500 at a time,
although I am looking at ways to do this via multiple (one-per-table)
copy commands for performance reasons.
I am currently running: PostgreSQL 8.0.4,
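Assuming the data can be staged in a file readable by the server process, a COPY-based load might look like this (the table, columns, and path are illustrative):

```sql
-- COPY streams all rows through one command, avoiding the per-statement
-- parse/plan overhead of row-by-row (or 500-row batched) INSERTs.
COPY big_table (id, payload) FROM '/tmp/big_table.dat';
```

From JDBC this requires either the driver's COPY support or a server-side file as above; plain batched INSERTs cannot match it.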
Heh looks like I left a trailing thought...
My post wasn't saying don't use journaled filesystems, but rather
that it can be slower than non-journaled filesystems, and I don't
consider recovery time from a crash to be a factor in determining the
speed of reads and writes on the data. That
Jeremy Haile [EMAIL PROTECTED] writes:
1) Why does the performance degrade as the table sizes grow? Shouldn't
the insert performance remain fairly constant if there are no indexes or
constraints?
Yeah, insert really should be a constant-time operation if there's no
add-on operations like
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance suffers from that, because the internal tables
(especially
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance
we are currently running a postgres server (upgraded to 8.1) which has
one large database with approx. 15,000 tables. Unfortunately
performance
suffers from that, because the internal tables (especially that which
holds the attribute info) get too large.
(We NEED that many tables, please
On Thu, 2005-12-01 at 13:34, Michael Riess wrote:
Michael Riess [EMAIL PROTECTED] writes:
On 12/1/05, Michael Riess [EMAIL PROTECTED] wrote:
we are currently running a postgres server (upgraded to 8.1) which
has one large database with approx. 15,000 tables. Unfortunately
performance
[EMAIL PROTECTED] wrote:
what i noticed is that autovacuum is not working as it should. i had 8.1
running with autovacuum for just 2 days or so and got warnings in pgadmin
that my tables would need a vacuum.
Hum, so how is autovacuum's documentation lacking? Please read it
critically and
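For reference, the knobs involved live in postgresql.conf; the values shown are only illustrative, not recommendations:

```
stats_start_collector = on             # required for autovacuum in 8.1
stats_row_level = on                   # required for autovacuum in 8.1
autovacuum = on                        # launch the autovacuum daemon
autovacuum_naptime = 60                # seconds between checks
autovacuum_vacuum_threshold = 1000     # min updated/deleted rows
autovacuum_vacuum_scale_factor = 0.4   # plus this fraction of the table
```

If the row-level stats settings are off, autovacuum has nothing to go on and tables will accumulate dead rows despite the daemon running.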
As a follow up to my own question:
I reran the COPY both ways (with the index
and without) while running iostat. The following values
are averages:
          %user  %nice  %sys  %iowait  %idle
no index     39      0   2.8       11     47
index        16    1.5   2.1       34     46
I'm no performance guru, so please
indulge a couple of
Rick,
On 12/1/05 2:18 PM, Rick Schumeyer [EMAIL PROTECTED] wrote:
As a follow up to my own question:
I reran the COPY both ways (with the index and without) while running iostat.
The following values
are averages:
%user %nice %sys %iowait %idle
no index 39 0
Luke Lonergan [EMAIL PROTECTED] writes:
The problem I see is nicely shown by the increase in IOWAIT between the two
patterns (with and without index). It seems likely that the pattern is:
A - insert a tuple into the table
B - insert an entry into the index
C - fsync the WAL
- repeat
This
I only have one CPU. Is my copy of iostat confused, or does this have
something to do with hyperthreading or dual core? (AFAIK, I don't have a
dual core!)
The problem (for me) with dropping the index during a copy is that it takes
tens of minutes (or more) to recreate the geometry index once
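The usual trade-off, assuming the load window can tolerate the index being absent (index, table, and file names here are illustrative):

```sql
-- Load without the index, then build it once over the finished table;
-- a single bulk build is normally much cheaper than maintaining the
-- index incrementally for every inserted row.
DROP INDEX geo_table_geom_idx;
COPY geo_table FROM '/tmp/batch.dat';
CREATE INDEX geo_table_geom_idx ON geo_table USING gist (the_geom);
```

Whether this wins depends on whether the one-time rebuild is shorter than the slowdown suffered while copying with the index in place.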
So say I need 10,000 tables, but I can create tablespaces. Wouldn't that solve
the performance problem caused by Linux's (or ext2/3's) problems with large
directories?
For example, if each user creates (say) 10 tables, and I have 1000 users, I
could create 100 tablespaces, and assign groups
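A sketch of that layout, with illustrative paths and names:

```sql
-- Each tablespace is a separate directory on disk, so tables spread
-- over 100 tablespaces never share one huge directory.
CREATE TABLESPACE ts_group01 LOCATION '/var/lib/pgsql/ts/group01';

CREATE TABLE user0001_notes (
    id   integer PRIMARY KEY,
    note text
) TABLESPACE ts_group01;
```

This addresses the large-directory problem at the filesystem level, though not the system-catalog bloat from having 10,000 tables.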
Tom,
That analysis is far too simplistic, because only the WAL
write has to happen before the transaction can commit. The
table and index writes will normally happen at some later
point in the bgwriter, and with any luck there will only need
to be one write per page, not per tuple.
Steve,
When we restore, the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is postmaster doing that it needs so much CPU?
Parsing mostly, and attribute conversion from text to DBMS native
formats.
2) How can we get our system to go faster?
Use
On Thu, 1 Dec 2005, Qingqing Zhou wrote:
David Lang [EMAIL PROTECTED] wrote
a few weeks ago I did a series of tests to compare different filesystems.
the test was for a different purpose, so the particulars are not what I
would do for testing aimed at postgres, but I think the data is relevant
here are the suggestions from the MySQL folks, what additional tests
should I do.
I'd like to see some tests submitted that map out when not to use a
particular database engine, so if you have a test that you know a
particular database chokes on let me know (bonus credibility if you
include
On Fri, 2 Dec 2005, David Lang wrote:
I don't have all the numbers readily available (and I didn't do all the
tests on every filesystem), but I found that even with only 1000
files/directory ext3 had some problems, and if you enabled dir_hash some
functions would speed up, but writing lots
On Thu, 1 Dec 2005, Craig A. James wrote:
So say I need 10,000 tables, but I can create tablespaces. Wouldn't that
solve the performance problem caused by Linux's (or ext2/3's) problems with
large directories?
For example, if each user creates (say) 10 tables, and I have 1000 users, I
On Fri, 2 Dec 2005, Luke Lonergan wrote:
Steve,
When we restore, the postmaster process tries to use 100% of the CPU.
The questions we have are:
1) What is postmaster doing that it needs so much CPU?
Parsing mostly, and attribute conversion from text to DBMS native
formats.
2) How can we