"Simon Riggs" <[EMAIL PROTECTED]> writes:
> On Fri, 2006-12-15 at 09:56 -0500, Tom Lane wrote:
>> The fundamental problem with it was the assumption that different
>> executions of a plan node will have the same timing. That's not true,
>> in fact not even approximately true.
> It doesn't make s
Please ignore my post from earlier today. As strange as it sounds, changing
"CREATE TEMP TABLE ... AS" to "CREATE TEMP TABLE ... LIKE" appeared to fix
my performance problem because things errored out so quickly (and silently
in my test program). After checking the pgsql logs, it became clear to
On Fri, 2006-12-15 at 10:57 -0500, Tom Lane wrote:
> Peter Eisentraut <[EMAIL PROTECTED]> writes:
> > On Friday, 15 December 2006 at 11:28, Simon Riggs wrote:
> >> Until we work out a better solution we can fix this in two ways:
> >>
> >> 1. EXPLAIN ANALYZE [ [ WITH | WITHOUT ] TIME STATISTICS ]
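(For context, the cost at issue is the per-row clock read EXPLAIN ANALYZE makes at every plan node. A rough illustration against a hypothetical table "orders"; later releases did eventually grow an option in this spirit, keeping row counts while skipping the timing calls:)

    -- Standard form: timing overhead on every row at every node
    EXPLAIN ANALYZE SELECT count(*) FROM orders;

    -- Modern analogue of the proposal (9.2 and later):
    EXPLAIN (ANALYZE, TIMING OFF) SELECT count(*) FROM orders;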
In response to Stephen Frost <[EMAIL PROTECTED]>:
> * Bill Moran ([EMAIL PROTECTED]) wrote:
> > What I'm fuzzy on is how to tell definitively when I'm overflowing
> > work_mem? Obviously, if work_mem is exhausted by a particular
> > query, temp files will be created and performance will begin to su
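(A sketch of how to see the spill directly, assuming a hypothetical table "big"; releases from 8.3 on report the sort method in EXPLAIN ANALYZE output:)

    SET work_mem = '1MB';
    EXPLAIN ANALYZE SELECT * FROM big ORDER BY some_col;
    -- "Sort Method: external merge  Disk: 52400kB" means the sort
    -- spilled past work_mem into temp files;
    -- "Sort Method: quicksort  Memory: 910kB" means it fit in memory.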
In response to Tom Lane <[EMAIL PROTECTED]>:
> Bill Moran <[EMAIL PROTECTED]> writes:
> > Does the creation of a temp file trigger any logging?
>
> No; but it wouldn't be hard to add some if you wanted. I'd do it at
> deletion, not creation, so you could log the size the file reached.
> See File
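(Later releases implemented essentially this suggestion as the log_temp_files setting, added in 8.3, which logs at deletion time with the peak file size; a minimal sketch:)

    -- Log every temp file regardless of size (the value is a threshold in kB):
    SET log_temp_files = 0;
    -- yields log lines roughly of the form:
    -- LOG:  temporary file: path "base/pgsql_tmp/pgsql_tmp12345.0", size 4194304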
"Steven Flatt" <[EMAIL PROTECTED]> writes:
> I can fix this problem by using CREATE TEMP TABLE ... LIKE instead of CREATE
> TEMP TABLE ... AS.
That seems ... um ... bizarre. Now, are you able to put together a
self-contained test case? Seems like we could have two independent bugs
here: first, wh
I have an update on this.
The reason I couldn't reproduce this problem was the way I was
creating the temp table in my tests. I was using:
CREATE TEMP TABLE tmp (LIKE perm);
This form did not exhibit the performance degradation over time.
However, the way our application was creating this t
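(For anyone skimming: the two forms are easy to conflate but behave quite differently. A minimal contrast, assuming a permanent table "perm":)

    -- Copies column definitions only; the new table starts empty:
    CREATE TEMP TABLE tmp (LIKE perm);

    -- Creates the table AND populates it from the query result:
    CREATE TEMP TABLE tmp AS SELECT * FROM perm;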
On Sat, 16 Dec 2006, Arjen van der Meijden wrote:
On 16-12-2006 4:24 Jeff Frost wrote:
We can add more RAM and drives for testing purposes. Can someone suggest
which benchmarks, with which settings, would be desirable to see how this
system performs? I don't believe I've seen any postgres benchm
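(One possible starting point, offered only as a suggestion: the pgbench tool from contrib gives a baseline TPC-B-style number. Something along these lines, with a hypothetical database "testdb":)

    pgbench -i -s 100 testdb      # initialize at scale factor 100 (roughly 1.5 GB)
    pgbench -c 10 -t 1000 testdb  # 10 concurrent clients, 1000 transactions each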
"Harry Hehl" <[EMAIL PROTECTED]> writes:
> Why do vacuum full and reindex make a difference if the 3 tables are
> never updated and no records are deleted?
Probably because you did an ANALYZE somewhere and updated the planner's
stats. I think your major problem is poor estimation of the ds_tables
resul
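(The standard way to test that theory: refresh the statistics by hand and compare the planner's row estimates against reality. A sketch, using a hypothetical join column for illustration:)

    ANALYZE ds_tables;   -- refresh the planner's statistics for this table
    -- then compare estimated vs. actual row counts node by node:
    EXPLAIN ANALYZE SELECT * FROM ds_tables t JOIN ds_types ty ON ty.table_oid = t.oid;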
I have a database that has 3 tables with a relatively small number of
records in each. (see schema/counts below). These 3 tables are loaded
when the db is created and there are never any updates or deletes on the
3 tables. This database does have many other tables.
ds_tables 132 rows, ds_types
"Sabin Coanda" <[EMAIL PROTECTED]> writes:
> A vacuum full command logs the message:
> ... LOG: transaction ID wrap limit is 1073822617, limited by database "A"
> Some time ago, the vacuum full logged:
> ... LOG: transaction ID wrap limit is 2147484148, limited by database "A"
> What causes tha
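(The limit in that log line tracks the oldest datfrozenxid in the cluster, so it moves as vacuum freezes old tuples. A query along these lines shows the raw inputs:)

    SELECT datname, datfrozenxid, age(datfrozenxid)
      FROM pg_database
     ORDER BY age(datfrozenxid) DESC;
    -- age() counts transactions elapsed since datfrozenxid; the wrap
    -- limit reported in the log sits roughly 2^31 XIDs past the oldest one.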
Sorry for the delay in responding. I had familial obligations.
As a matter of fact, I am spending a decent amount of time on
this. I don't usually pore through documentation for compilers and
OS's to the degree I've been since this thread started. Nor do I
usually try and get access to the
Hi all,
A vacuum full command logs the message:
... LOG: transaction ID wrap limit is 1073822617, limited by database "A"
Some time ago, the vacuum full logged:
... LOG: transaction ID wrap limit is 2147484148, limited by database "A"
What causes that difference in the limit? Should I set or