only. We plan to improve it so that
it can run against PostgreSQL patches. To find more information about
STP, visit: http://www.osdl.org/stp/.
A sample OSDL-DBT3 test result report can be found at:
http://khack.osdl.org/stp/276912/
Your comments are welcome,
Regards,
Jenny
--
Jenny Zhang
Open Source Development Lab
> be interesting to see.. :-)
Let me know if you have any suggestions about how to improve the test
kit (parameters, reported information, etc.), or how to make it more
useful to the PG community.
Thanks,
--
Jenny Zhang
Open Source Development Lab Inc
12725 SW Millikan Way
Suite 400
Our hardware/software configuration:
kernel: 2.5.74
distro: RH7.2
pgsql: 7.3.3
CPUS: 8
MHz: 700.217
model: Pentium III (Cascades)
memory: 829 kB
shmmax: 3705032704
We did several sets of runs (repeating runs with the same database
parameters) and have the following observations:
1. With
Thanks for your prompt reply.
On Thu, 2003-09-18 at 16:19, Matt Clark wrote:
> > We thought the large effective_cache_size should lead us to better
> > plans. But we found the opposite.
>
> Maybe it's inappropriate for little old me to jump in here, but the plan
> isn't usually that important com
I posted more results as you requested:
On Fri, 2003-09-19 at 08:08, Manfred Koizar wrote:
> On Thu, 18 Sep 2003 15:36:50 -0700, Jenny Zhang <[EMAIL PROTECTED]>
> wrote:
> >We thought the large effective_cache_size should lead us to better
> >plans. But we found the opposite.
On Thu, 2003-09-18 at 20:20, Tom Lane wrote:
> Jenny Zhang <[EMAIL PROTECTED]> writes:
> > ... It seems to me that small
> > effective_cache_size favors the choice of nested loop joins (NLJ)
> > while the big effective_cache_size is in favor of merge joins (MJ).
>
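To make that concrete, here is roughly what we do between runs: flip the
setting in the session and look at the plan again. The join below is only a
stand-in for the real DBT-3 query, and the two values are just examples
(effective_cache_size is in 8 kB pages):

SHOW effective_cache_size;

-- small setting: this is the side that favored the nested loop in our runs
SET effective_cache_size = 1000;        -- about 8 MB
EXPLAIN
SELECT o.o_orderkey, l.l_extendedprice
FROM orders o, lineitem l
WHERE l.l_orderkey = o.o_orderkey
  AND o.o_orderdate >= date '1997-01-01';

-- large setting: the same query then tends toward the merge join
SET effective_cache_size = 393216;      -- about 3 GB
EXPLAIN
SELECT o.o_orderkey, l.l_extendedprice
FROM orders o, lineitem l
WHERE l.l_orderkey = o.o_orderkey
  AND o.o_orderdate >= date '1997-01-01';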
On Fri, 2003-09-19 at 06:12, Greg Stark wrote:
> Tom Lane <[EMAIL PROTECTED]> writes:
>
> > I think this is a pipe dream. Variation in where the data gets laid
> > down on your disk drive would alone create more than that kind of delta.
> > I'm frankly amazed you could get repeatability within 2-
I am running TPC-H with a scale factor of 1 on RedHat 7.2 with kernel
2.5.74.  Q17 always finishes in about 7 seconds on my system.  The
execution plan is:
Aggregate  (cost=780402.43..780402.43
   Filter: ((p_brand = 'Brand#11'::bpchar) AND (p_container = 'SM PKG'::bpchar))
   SubPlan
     ->  Aggregate  (cost=256892.28..256892.28 rows=1 width=11)
           ->  Seq Scan on lineitem  (cost=0.00..256
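For reference, the Q17 we run is the standard TPC-H template with the
substitution parameters you can see in the filter above (Brand#11, SM PKG),
roughly:

select sum(l_extendedprice) / 7.0 as avg_yearly
from lineitem, part
where p_partkey = l_partkey
  and p_brand = 'Brand#11'
  and p_container = 'SM PKG'
  and l_quantity < (
        select 0.2 * avg(l_quantity)
        from lineitem
        where l_partkey = p_partkey);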
 on shopping_cart  (cost=0.00..5.01 rows=1 width=144) (actual time=0.22..0.37 rows=1 loops=1)
   Index Cond: (sc_id = 260706::numeric)
 Total runtime: 1.87 msec
(3 rows)
Is it true that using pl/pgsql increases the overhead that much?
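For what it is worth, the comparison I am making is the plain query above
versus the same lookup wrapped in a trivial PL/pgSQL function, timed with
\timing in psql. The wrapper below is only a simplified stand-in for our
real procedure (everything except shopping_cart and sc_id is made up):

-- the sort of direct query that produces the plan above
EXPLAIN ANALYZE SELECT * FROM shopping_cart WHERE sc_id = 260706::numeric;

-- a minimal wrapper for comparison
CREATE FUNCTION cart_row_count(numeric) RETURNS integer AS '
DECLARE
    n integer;
BEGIN
    SELECT INTO n count(*) FROM shopping_cart WHERE sc_id = $1;
    RETURN n;
END;
' LANGUAGE 'plpgsql';

SELECT cart_row_count(260706::numeric);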
TIA,
Jenny
--
Jenny Zhang
Open Source Development Lab
12725 SW Millikan Way, Suite 400
Oops, I named the variable the same as the column. Changing it to
something else solved the problem.
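In case someone else trips over this in the archives, the mistake looked
roughly like this (all of the names below are invented):

CREATE FUNCTION slow_lookup(numeric) RETURNS numeric AS '
DECLARE
    cust_id numeric := $1;    -- bad: same name as the column big_table.cust_id
    result  numeric;
BEGIN
    -- plpgsql substitutes the variable on both sides of the comparison,
    -- so the condition is effectively $1 = $1 and the whole table is scanned
    SELECT INTO result sum(amount) FROM big_table WHERE cust_id = cust_id;
    RETURN result;
END;
' LANGUAGE 'plpgsql';

-- renaming the variable (e.g. p_cust_id) lets the index on cust_id be used again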
Thanks,
Jenny
On Tue, 2003-12-16 at 15:54, Stephan Szabo wrote:
> On Tue, 16 Dec 2003, Jenny Zhang wrote:
>
> > I have stored procedure written in pl/pgsql which takes abou
_id;
END;
' IMMUTABLE LANGUAGE 'plpgsql';
create index i_item_order on item (item_order(i_subject));
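In case the paste above got mangled by the mailer, the shape of what I am
doing is the following; the body here is just a made-up stand-in, since the
real one did not survive:

CREATE FUNCTION item_order(text) RETURNS integer AS '
BEGIN
    -- stand-in body: the real function computes an ordering value from i_subject
    RETURN length($1);
END;
' IMMUTABLE LANGUAGE 'plpgsql';

create index i_item_order on item (item_order(i_subject));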
TIA,
--
Jenny Zhang
Open Source Development Lab
12725 SW Millikan Way, Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31