Hello, I have PostgreSQL 8.0.3 running on a "workstation" with 768 MB
of RAM, under FreeBSD. And I have a 47-million-row table:
qnex=# explain select * from log;
QUERY PLAN
Which row do you want? Do you want 'a row' at random?
I presume you want the N latest rows?
In that case you should use an ORDER BY on an indexed field; the serial
primary key will do nicely (ORDER BY id DESC). It's indexed, so the planner
will use the index and it will fly.
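A minimal sketch of that pattern, assuming the serial primary key is named id as elsewhere in the thread (N = 50 is an arbitrary example):

```sql
-- Fetch the N latest rows by walking the primary-key index backwards,
-- instead of scanning and sorting the whole 47-million-row table.
SELECT *
FROM log
ORDER BY id DESC
LIMIT 50;
```

Because id is the indexed serial key, the planner can do a backward index scan and stop after the first 50 rows.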
Any
Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
> Limit (cost=15912.20..15912.31 rows=1 width=272)
>-> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
> If I set enable_hashjoin=false:
> qnex=# EXPLAIN ANALYZE S
Hi all,
I am trying to do an update on a table but so far I can't seem to
come up with a usable index. After my last question/thread the user
'PFC' recommended I store whether a file was to be backed up as either
't'(rue), 'f'(alse) or 'i'(nherit) to speed up changing files and sub
director
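One way the suggested flag could be modeled; the table and column names here are hypothetical, since the original message is truncated:

```sql
-- 'i' = inherit from parent directory, 't' = back up, 'f' = skip
ALTER TABLE files ADD COLUMN backup char(1) NOT NULL DEFAULT 'i'
    CHECK (backup IN ('t', 'f', 'i'));

-- A partial index leaves the common 'inherit' rows out entirely,
-- so locating the explicitly-flagged files and subtrees stays cheap.
CREATE INDEX files_backup_idx ON files (backup) WHERE backup <> 'i';
```

Partial indexes and column CHECK constraints are both available in 8.0, so this sketch fits the version under discussion.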
Dawid Kuroczko wrote:
>work_mem = 102400
>...I tried tweaking cpu_*, work_mem, effective_cache and so on, but without
>any luck.
I'm hoping you didn't tweak it enough! I posted something similar
to this a while ago, but haven't since got around to figuring out
a useful test case to send to the list
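Rather than editing postgresql.conf for every experiment, the setting quoted above can be changed per session; in 8.0, work_mem is an integer in kilobytes:

```sql
-- Try a larger sort/hash memory budget for just this session
SET work_mem = 102400;   -- 100 MB, matching the posted config value
SHOW work_mem;           -- confirm the active value
-- ... rerun the EXPLAIN ANALYZE of interest, then restore the default:
RESET work_mem;
```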
Line noise, sorry...
After posting I went back to reading the pgsql docs and saw the query:
SELECT am.amname AS index_method, opc.opcname AS opclass_name,
opr.oprname AS opclass_operator FROM pg_am am, pg_opclass opc, pg_amop
amop, pg_operator opr WHERE opc.opcamid = am.oid AND amop.amopcla
On 7/22/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
> > qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
>
> > Limit (cost=15912.20..15912.31 rows=1 width=272)
> >-> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
>
>
I wrote:
> Dawid Kuroczko <[EMAIL PROTECTED]> writes:
>> qnex=# EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
>> Limit (cost=15912.20..15912.31 rows=1 width=272)
>> -> Hash Join (cost=15912.20..5328368.96 rows=47044336 width=272)
>> If I set enable_hashjoin=false:
>> qnex=# EXPLA
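For reference, the planner experiment being quoted can be reproduced per session like this (table names are the ones from the thread):

```sql
SET enable_hashjoin = off;   -- steer the planner away from hash joins
EXPLAIN SELECT * FROM log NATURAL JOIN useragents LIMIT 1;
SET enable_hashjoin = on;    -- restore the default afterwards
```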
Here is the SCSI output:
Web Server
SCSI subsystem driver Revision: 1.00
megaraid: v1.18j (Release Date: Mon Jul 7 14:39:55 EDT 2003)
megaraid: found 0x1028:0x000f:idx 0:bus 4:slot 3:func 0
scsi0 : Found a MegaRAID controller at 0xf883f000, IRQ: 18
scsi0 : Enabling 64 bit support
megaraid: [412
On Fri, 2005-07-22 at 12:20 -0400, Tom Lane wrote:
> I think that this refutes the original scheme of using the same fuzz
> factor for both startup and total cost comparisons, and therefore
> propose the attached patch.
>
> Comments?
Looks good. I think it explains a few other weird perf reports
Simon Riggs <[EMAIL PROTECTED]> writes:
> Looks good. I think it explains a few other weird perf reports also.
Could be. I went back to look at Sam Mason's report about three weeks
ago, and it definitely seems to explain his issue. The "fuzzy cost
comparison" logic is new in 8.0 so it hasn't had
On a single spindle:
$ time dd if=/dev/zero of=bigfile bs=8k count=200
200+0 records in
200+0 records out
real    2m8.569s
user    0m0.725s
sys     0m19.633s
None of my drives are partitioned big enough for me to create 2x RAM
sized files on a single disk. I have 16MB RAM and only 3
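A sketch of the kind of sequential-write test being discussed; the file path and sizes below are placeholders, and a genuine cache-busting run needs a file roughly twice the machine's RAM:

```shell
# Write an 8 MB file in 8 kB blocks and report its size.
# Scale count up to ~2x RAM to defeat the OS page cache in a real test.
dd if=/dev/zero of=/tmp/ddtest bs=8k count=1024 2>/dev/null
wc -c /tmp/ddtest
```

Wrapping the dd in `time`, as in the output above, gives the wall-clock figure to compare across spindles.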
On 7/22/05, Tom Lane <[EMAIL PROTECTED]> wrote:
> > This is quite strange. The nestloop plan definitely should be preferred
> > in the context of the LIMIT, considering that it has far lower estimated
> > cost. And it is preferred in simple tests for me.
>
> After a suitable period of contemplat