On Tue, 2007-09-25 at 00:53 -0400, Carlo Stonebanks wrote:
> My problem is that I think that SRFs are causing my problems. SRFs
> get an automatic row estimate of 1000 rows.
That's correct. That's what I see too, though I may return 10K rows of
data (min 10 columns).
But it's way faster than
My problem is that I think that SRFs are causing my problems. SRFs
get an automatic row estimate of 1000 rows. Add a condition to it, and the
planner guesses 333 rows. Even at 333, this is an overestimate of the number
of rows returned.
I'm really disappointed - SRFs are a great way to place t
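A minimal sketch of the estimates being described (function and table names are invented for illustration). The planner assumes 1000 rows for any set-returning function by default; from 8.3 on, the ROWS clause lets you declare a more realistic figure:

  -- Default: the planner guesses 1000 rows for the function's output,
  -- and 333 once an inequality filter is applied on top of it.
  EXPLAIN SELECT * FROM my_srf(42) WHERE some_col > 0;

  -- 8.3+ only: override the default estimate at function creation time.
  CREATE OR REPLACE FUNCTION my_srf(int)
  RETURNS SETOF my_table AS $$
      SELECT * FROM my_table WHERE group_id = $1;
  $$ LANGUAGE sql STABLE ROWS 10000;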
On Mon, 2007-09-24 at 14:12 -0400, Carlo Stonebanks wrote:
> Has anyone offered any answers to you? No one else has replied to this post.
Overestimate of selectivity. I guess it's mainly due to my one-to-many
table relationships. I've tried everything from concatenated join
columns and indexing it
"Carlo Stonebanks" <[EMAIL PROTECTED]> writes:
> Is there a rule of thumb about when the planner's row estimates are too high?
> In particular, when should I be concerned that the planner's estimated number
> of rows for a nested loop is off? By a factor of 10? 100? 1000?
Not really. It's a
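The practical way to answer this for a given query is to compare the estimate against reality with EXPLAIN ANALYZE (table and column names below are illustrative):

  EXPLAIN ANALYZE
  SELECT *
  FROM orders o
  JOIN order_items i ON i.order_id = o.id
  WHERE o.created_at > now() - interval '1 day';

  -- Compare "rows=" in the plan with "actual ... rows=" in the output.
  -- When a node feeding a nested loop is off by 10x or more, raising the
  -- statistics target for the misestimated column often helps:
  ALTER TABLE orders ALTER COLUMN created_at SET STATISTICS 1000;
  ANALYZE orders;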
Is there a rule of thumb about when the planner's row estimates are too
high? In particular, when should I be concerned that the planner's estimated
number of rows for a nested loop is off? By a factor of 10? 100? 1000?
Carlo
Has anyone offered any answers to you? No one else has replied to this post.
"Ow Mun Heng" <[EMAIL PROTECTED]> wrote in message
news:[EMAIL PROTECTED]
On Thu, 2007-09-20 at 11:02 -0400, Carlo Stonebanks wrote:
(SORRY FOR THE REPOST, I DON'T SEE MY ORIGINAL QUESTION OR ANY ANSWERS
HERE)
I am
On Mon, 2007-09-24 at 16:04 +0200, Csaba Nagy wrote:
> On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:
> > Csaba, please can you copy that data into fresh tables, re-ANALYZE and
> > then re-post the EXPLAINs, with stats data.
>
> Well, I can of course. I actually tried to generate some random
On 24 sep 2007, at 17.21, Fabiola Fernández wrote:
I am trying to decide if it would be worth using LONGTEXT instead
of TEXT, as maybe it would slow down the data insertion and
extraction.
Postgres doesn't have a LONGTEXT datatype, so keep using TEXT.
http://www.postgresql.org/docs/8.2/int
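Since Postgres has a single variable-length text type, a minimal illustration (table name invented); unlike MySQL there is no TEXT/MEDIUMTEXT/LONGTEXT split, and TEXT carries no performance penalty versus varchar:

  CREATE TABLE semantic_definition (
      id         serial PRIMARY KEY,
      definition text  -- any length; no "long" variant exists or is needed
  );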
> -Original Message-
> From: Gábor Farkas
>
>
> i see.
>
> will i achieve the same thing by simply dropping that table
> and re-creating it?
Yes. Or even easier (if you don't need the data anymore) you can use the
TRUNCATE command, which deletes everything in the table including dead rows.
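A sketch of that suggestion, using the session table discussed elsewhere in this digest:

  -- Unlike DELETE, TRUNCATE reclaims all space immediately and leaves
  -- no dead rows behind. Note it takes an exclusive lock on the table.
  TRUNCATE TABLE main_activity;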
On Mon, 2007-09-24 at 17:14 +0200, Gábor Farkas wrote:
> will i achieve the same thing by simply dropping that table and
> re-creating it?
If you have an index/PK on that table, the fastest and most useful way
to rebuild it is to do CLUSTER on that index. That will be a lot faster
than VACUUM FULL.
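A sketch of that suggestion using the 8.2-era syntax (the index name here is assumed):

  -- Rewrites the table in index order, discarding all dead space.
  -- Like VACUUM FULL it locks the table exclusively, but is typically faster.
  CLUSTER main_activity_pkey ON main_activity;
  ANALYZE main_activity;  -- CLUSTER does not update planner statistics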
On 9/24/07, Fabiola Fernández <[EMAIL PROTECTED]> wrote:
> I have a database with a number of tables, and in several of them I have an
> attribute for a semantic definition, for which I use a field of type text. I
> am trying to decide if it would be worth using LONGTEXT instead of TEXT, as
> maybe
In response to "Gábor Farkas" <[EMAIL PROTECTED]>:
> Heikki Linnakangas wrote:
> > Gábor Farkas wrote:
> >>
> >> if i do a full vacuum to that table only, will the database still serve
> >> data from the other tables at a normal speed?
> >
> > Yes. The extra I/O load vacuum full generates while i
Hello,
I have a database with a number of tables, and in several of them I have an
attribute for a semantic definition, for which I use a field of type text. I
am trying to decide whether it would be worth using LONGTEXT instead of TEXT,
as maybe it would slow down data insertion and extraction. I
Heikki Linnakangas wrote:
Gábor Farkas wrote:
if i do a full vacuum to that table only, will the database still serve
data from the other tables at a normal speed?
Yes. The extra I/O load vacuum full generates while it's running might
disrupt other activity, though.
i see.
will i achieve
Gábor Farkas wrote:
> hmm... can a full-vacuum be performed while the database is still "live"
> (i mean serving requests)?
>
> will the db still be able to respond to queries?
VACUUM FULL takes an exclusive lock on the table, which means that other
queries accessing it will block and wait until it's finished.
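The contrast in lock behaviour, as a short sketch on the table from this thread:

  -- Plain VACUUM runs concurrently with normal reads and writes; it marks
  -- dead rows as reusable but does not shrink the file on disk.
  VACUUM main_activity;

  -- VACUUM FULL compacts the table and returns space to the OS, but blocks
  -- every other query on the table until it finishes.
  VACUUM FULL main_activity;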
Heikki Linnakangas wrote:
Gábor Farkas wrote:
- but in the past, the cron-job was not in place, so the table's size
grew to around 800000 entries (in around 80 days)
That bloated your table, so that there's still a lot of empty pages in
it. VACUUM FULL should bring it back to a reasonable size
Gábor Farkas wrote:
> - this table has a lot of updates and inserts, it works very similarly
> to a session-table for a web-application
Make sure you run VACUUM often enough.
> - there is a cron-job that deletes all the old entries, so its size is
> roughly between 15000 and 35000 entries (it's r
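For a session-style table with constant updates and deletes, the maintenance being suggested is roughly this (on 8.1+ autovacuum can be left to do the same job):

  -- Run frequently, e.g. from cron, so dead rows are reclaimed
  -- before the table has a chance to bloat.
  VACUUM ANALYZE main_activity;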
On 9/24/07, Gábor Farkas <[EMAIL PROTECTED]> wrote:
>
>
> INFO: "main_activity": found 41001 removable, 11672 nonremovable row
> versions in 160888 pages
> DETAIL: 0 dead row versions cannot be removed yet.
> There were 14029978 unused item pointers.
> 0 pages are entirely empty.
> CPU 5.53s/1.71
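For reference, the command that produces that report, plus how to read it: 11672 live row versions spread over 160888 pages (about 1.2 GB at 8 kB per page) is severe bloat.

  -- VERBOSE reports removable (dead) vs. nonremovable (live) row versions
  -- and the page count; a huge page count for few live rows signals bloat.
  VACUUM VERBOSE main_activity;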
I have found the reason!!! I began going through postgresql.conf line by line
and saw ssl = true.
I disabled ssl and then restarted the server, and that's all. It's 4 or 5
times faster than the old server.
I don't understand why PgAdmin connects using ssl when I have left this
field empty
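The change being described lives in postgresql.conf and needs a server restart to take effect:

  # postgresql.conf
  ssl = off    # was 'true'; skips TLS negotiation and encryption per connection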
hi,
i have the following table:
CREATE TABLE "main_activity" (
    "id" serial NOT NULL PRIMARY KEY,
    "user_id" integer NOT NULL,
    "sessionid" varchar(128) NOT NULL,
    "login" timestamp with time zone NOT NULL,
    "activity" timestamp with time zone NOT NULL,
    "logout" timestamp with time zone
);
On Mon, 2007-09-24 at 14:27 +0100, Simon Riggs wrote:
> Csaba, please can you copy that data into fresh tables, re-ANALYZE and
> then re-post the EXPLAINs, with stats data.
Well, I can of course. I actually tried to generate some random data
with similar record counts and relations between the tables
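A hedged sketch of generating such random test data in Postgres (table and column names invented):

  -- generate_series() drives the row count; random() fakes the value
  -- distribution for a one-to-many parent/child relationship.
  CREATE TABLE test_parent AS
      SELECT g AS id, (random() * 1000)::int AS category
      FROM generate_series(1, 100000) g;

  CREATE TABLE test_child AS
      SELECT g AS id, (random() * 99999)::int + 1 AS parent_id
      FROM generate_series(1, 1000000) g;

  ANALYZE test_parent;
  ANALYZE test_child;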
Hi Greg, this is my Bonnie result.
Version  1.03      ------Sequential Output------ --Sequential Input- --Random-
                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine       Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
insaubi         8G  2589
On Fri, 2007-09-21 at 19:30 -0400, Tom Lane wrote:
> Simon Riggs <[EMAIL PROTECTED]> writes:
> > That's not my perspective. If the LIMIT had been applied accurately to
> > the cost then the hashjoin would never even have been close to the
> > nested join in the first place.
>
> [ shrug... ] Your
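For context, a sketch of the kind of plan choice under discussion (schema invented): with a small LIMIT the planner discounts a nested loop's cost because it can stop after a few rows, whereas a hash join must build its entire hash table before emitting the first row.

  EXPLAIN
  SELECT *
  FROM big_table b
  JOIN other_table o ON o.id = b.other_id
  LIMIT 10;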
On Mon, 2007-09-17 at 07:27 -0500, Decibel! wrote:
> Take a look at the stuff at http://decibel.org/~decibel/pervasive/, it'd
> hopefully provide a useful starting point.
A bit off track, but I was reading the articles and noticed this at the
bottom. Is this a typo, or ...
Making PostreSQL perva