"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I can't reproduce the problem that way either (or when using a server-side
> PLpgSQL function to do similar). It looks like you have to go through an
> ODBC connection, with the looping done on the client side. Each individual
> insert to the temp table …
On 12/19/06, Tom Lane <[EMAIL PROTECTED]> wrote:
I still can't reproduce this. Using 7.4 branch tip, I did
create temp table foo(f1 varchar);
create table nottemp(f1 varchar);
\timing
insert into foo select stringu1 from tenk1 limit 100;
insert into nottemp select * from foo;
truncate foo;
insert …
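Filled out, the timing test sketched above would look something like this (the exact loop count is an assumption; `tenk1` is a table from the PostgreSQL regression-test database, and the point is to repeat the cycle and compare early vs. late iteration times):

```sql
-- Sketch of the repro attempt: repeat the fill / copy / truncate
-- cycle and watch whether each round slows down over time.
create temp table foo (f1 varchar);
create table nottemp (f1 varchar);
\timing

-- Repeat this block many times (e.g. from a client-side loop) and
-- compare the reported times of the first and last iterations:
insert into foo select stringu1 from tenk1 limit 100;
insert into nottemp select * from foo;
truncate foo;
```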
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Issue #1:
> (I'm assuming there's a reasonable explanation for this.) If I create a
> temp table with a single varchar column (or text column), do 100 inserts to
> that table, copy to a permanent table, truncate the temp table and repeat,
> the time re…
Please ignore my post from earlier today. As strange as it sounds, changing
"CREATE TEMP TABLE ... AS" to "CREATE TEMP TABLE ... LIKE" appeared to fix
my performance problem because things errored out so quickly (and silently
in my test program). After checking the pgsql logs, it became clear to …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I can fix this problem by using CREATE TEMP TABLE ... LIKE instead of CREATE
> TEMP TABLE ... AS.
That seems ... um ... bizarre. Now are you able to put together a
self-contained test case? Seems like we could have two independent bugs
here: first, wh…
I have an update on this.
The reason I couldn't reproduce this problem was because of the way I was
creating the temp table in my tests. I was using:
CREATE TEMP TABLE tmp (LIKE perm);
This showed no performance degradation over time.
However, the way our application was creating this temp table was with
CREATE TEMP TABLE ... AS …
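For reference, the two creation forms being contrasted in this thread differ in how the temp table gets its definition (table and column names here are illustrative; the `limit 0` idiom is an assumption about how one would copy structure without data):

```sql
-- Form that showed no degradation in the tests above:
-- LIKE copies only the column definitions from perm.
create temp table tmp (like perm);

-- Form the application actually used:
-- AS runs the query and derives the new table's columns
-- (and contents) from the query's result.
create temp table tmp as select * from perm limit 0;
```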
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I've been trying to reproduce the problem for days now :). I've done pretty
> much exactly what you describe below, but I can't reproduce the problem on
> any of our lab machines. Something is indeed special in this environment.
Yuck. You could try s…
I've been trying to reproduce the problem for days now :). I've done pretty
much exactly what you describe below, but I can't reproduce the problem on
any of our lab machines. Something is indeed special in this environment.
Thanks for all your help,
Steve
On 12/15/06, Tom Lane <[EMAIL PROTECTED]> wrote: …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Are the removable rows in pg_class even an issue? So what if 5000-6000 dead
> tuples are generated every hour then vacuumed? Performance continues to
> steadily decline over a few days time. Memory usage does not appear to be
> bloating. Open file handles …
Good question, and I agree with your point.
Are the removable rows in pg_class even an issue? So what if 5000-6000 dead
tuples are generated every hour then vacuumed? Performance continues to
steadily decline over a few days time. Memory usage does not appear to be
bloating. Open file handles …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Our application is such that there is a great deal of activity at the
> beginning of the hour and minimal activity near the end of the hour.
OK ...
> The truncate and re-fill process is done once per hour, at the end of the
> high-load cycle, so I doubt …
Our application is such that there is a great deal of activity at the
beginning of the hour and minimal activity near the end of the hour. Those
3 vacuums were done at (approximately) 30 minutes past, 40 minutes past, and
50 minutes past the hour, during low activity. Vacuums of pg_class look
like …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Here's the output of "VACUUM VERBOSE pg_class". I think it looks fine. I
> even did it three times in a row, each about 10 minutes apart, just to see
> what was changing:
Hm, look at the numbers of rows removed:
> INFO: "pg_class": found 5680 removable …
Here's the output of "VACUUM VERBOSE pg_class". I think it looks fine. I
even did it three times in a row, each about 10 minutes apart, just to see
what was changing:
INFO: vacuuming "pg_catalog.pg_class"
INFO: index "pg_class_oid_index" now contains 3263 row versions in …
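The diagnostic that produced the output above can be run as-is in the affected database; the line to watch across repeated runs is the count of "removable" (dead) row versions:

```sql
-- Report how many dead ("removable") pg_class row versions each
-- pass reclaims, and how large the table and its indexes are:
VACUUM VERBOSE pg_class;
```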
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Regarding your other email -- interesting -- but we are vacuuming pg_class
> every hour. So I don't think the answer lies there...
That's good, but is the vacuum actually accomplishing anything? I'm
wondering if there's also a long-running transaction …
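A long-running transaction would pin the dead rows so that vacuum cannot remove them. The thread concerns 7.4, where pg_stat_activity had fewer columns; on a modern PostgreSQL (9.2 or later, whose column names this sketch assumes) one could check for such a transaction like so:

```sql
-- Sessions holding a transaction open the longest; any very old
-- xact_start keeps VACUUM from removing rows that died after it began.
SELECT pid, usename, xact_start, state, query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 5;
```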
Thanks for your replies.
Starting a fresh session (not restarting the postmaster) seems to be
sufficient to reset performance (and is an easy enough workaround). Still,
it would be nice to know the root cause of the problem.
The backend process does not seem to be bloating memory-wise (I'm using …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Having said that, what kinds of things should I be looking for that could
> deteriorate/bloat over time? Ordinarily the culprit might be infrequent
> vacuuming or analyzing, but that wouldn't be corrected by a restart of
> Postgres. In our case, restarting …
After running some further standalone tests using temp tables, I'm not
convinced the problem is specific to temp table usage. In fact it looks
like generic SQL activity degrades over time.
Having said that, what kinds of things should I be looking for that could
deteriorate/bloat over time? Ordinarily the culprit might be infrequent
vacuuming or analyzing, but that wouldn't be corrected by a restart of
Postgres …
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> Any idea where the bloat is happening? I believe that if we were dropping
> and re-creating the temp table over and over, that could result in pg_class
> bloat (among other catalog tables), but what is going wrong if we use the
> same table over and over …
On 12/13/06, Steven Flatt <[EMAIL PROTECTED]> wrote:
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table
1. how frequently are you committing …
Hi,
Our application is using Postgres 7.4 and I'd like to understand the root
cause of this problem:
To speed up overall insert time, our application will write thousands of
rows, one by one, into a temp table (same structure as a permanent table),
then do a bulk insert from the temp table to the permanent table …
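The hourly cycle described in the original post can be sketched as follows (table names are illustrative; in the real application the per-row inserts are issued one by one by the client over ODBC):

```sql
-- Illustrative sketch of the insert-batching pattern under discussion.
create temp table staging (like perm);

-- Thousands of single-row inserts, issued individually by the client:
insert into staging values ('row 1');
insert into staging values ('row 2');
-- ... and so on ...

-- Then one bulk move into the permanent table, and reset for the
-- next hour:
insert into perm select * from staging;
truncate staging;
```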