Thanks all for your feedback.

I think I should explain more about how to use this test kit. 

The main purpose of putting the test kit on the Scalability Test
Platform (STP) is to let testers run the workload against the database
with different parameters and Linux kernels to see performance
differences.  The test kit picks up default parameters if none are
provided; command-line parameters override the defaults.  Currently,
the following parameters are supported:
-s <scale_factor> -n <number of streams> -d '<database parameters>' -r
<{0|1}> -x <{0|1}>

-s <scale_factor> is the TPC-H database scale factor; right now, only
SF=1 is available.

-n <number of streams> is the number of throughput-test streams, which
corresponds to the number of simultaneous database connections during
the throughput test.

-d '<database parameters>' gives the database parameters used when
starting the postmaster.  For example:
-B 120000 -c effective_cache_size=393216 -c sort_mem=524288 -c
stats_command_string=true -c stats_row_level=true -c

-r {0|1}: indicates whether the database directory
base/<database dir>/pgsql_tmp is put on a separate disk drive.

-x {0|1}: indicates whether the WAL is put on a separate disk drive.

The other comments are in-lined:

On Mon, 2003-08-04 at 06:33, Manfred Koizar wrote:
> | effective_cache_size           | 1000
> With 4GB of memory this is definitely too low and *can* (note that I
> don't say *must*) lead the planner to wrong decisions.
I changed the default to effective_cache_size=393216 as calculated by
Scott.  Another way to check the execution plan is to go to the
results; there is a 'power_plan.out' file that records the execution
plan.  I am running a test with the changed effective_cache_size and
will see how it affects the plan.
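As a back-of-the-envelope check of that figure: effective_cache_size is counted in 8KB pages, so 393216 corresponds to about 3GB of cache on the 4GB box.  The "roughly 3GB usable for caching" assumption here is mine, chosen to reproduce the number:

```shell
# Sanity-check the effective_cache_size value in 8KB pages.
# Assumption (mine): ~3GB of the 4GB box ends up as OS cache
# plus shared buffers available to PostgreSQL.
PAGE_KB=8
CACHE_MB=3072
PAGES=$(( CACHE_MB * 1024 / PAGE_KB ))
echo "effective_cache_size = $PAGES"   # prints 393216
```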

> | shared_buffers                 | 15200
> ... looks reasonable.  Did you test with other values?
I have only one run, with shared_buffers=1200000, at:
The performance degraded.
> | sort_mem                       | 524288
> This is a bit high, IMHO, but might be ok given that DBT3 is not run
> with many concurrent sessions (right?).
> shows
> some swapping activity towards the end of the run which could be
> caused by a too high sort_mem setting.
Right, I ran only 4 streams.  Setting this parameter lower caused more
reading/writing to pgsql_tmp.  I guess the database has to do that if
it cannot sort in memory.
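The swapping Manfred observed is consistent with the arithmetic: sort_mem is counted in KB and applies per sort operation, so concurrent sorts across the streams can multiply it.  A rough worst-case bound, assuming (my assumption) one large sort per stream at a time:

```shell
# Rough upper bound on concurrent sort memory with the settings above.
# sort_mem is per sort operation; SORTS_PER_STREAM=1 is an assumption,
# and a complex plan may run several sorts at once.
SORT_MEM_KB=524288     # 512MB per sort, as configured
STREAMS=4
SORTS_PER_STREAM=1
TOTAL_MB=$(( SORT_MEM_KB * STREAMS * SORTS_PER_STREAM / 1024 ))
echo "worst-case sort memory: ${TOTAL_MB}MB"   # 2048MB on a 4GB box
```

Half the machine's RAM in sorts alone would plausibly push it into swap toward the end of the run.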

On 4 Aug 2003 at 15:33, Manfred Koizar wrote:
> I could not get postgresql.conf so I will combine the comments.
It is under database monitor data: database parameters
> 1. Effective cache size, already mentioned
> 2. Sort memory already mentioned.
> 3. Was WAL put on different drive?
That run did not put WAL on a different drive.  I changed it this
morning so that it is configurable.  I also changed the result page so
that testers can tell the configuration from it.
> 4. Can you try with autovacuum daemon and 7.4beta when it comes out..
I'd be happy to run it.  We would like to improve our Patch Life
Management (PLM) system so that it can accept PG patches and run
performance tests on those patches.  Right now PLM only manages Linux
kernel patches.  I would like to ask the PostgreSQL community whether
this kind of tool is of interest.
> 5. What was the file system? Ext2/Ext3/reiser/XFS?
> <Scratching head>
It is Ext2.  Yeah, it is not reported on the page.
> Is there any comparison available for other databases.. Could be interesting to 
> see..:-)
> </Scratching head>

Let me know if you have any suggestions about how to improve the test
kit (parameters, reported information, etc.), or how to make it more
useful to the PG community.

Jenny Zhang
Open Source Development Lab Inc 
12725 SW Millikan Way
Suite 400
Beaverton, OR 97005
(503)626-2455 ext 31
