On 10/26/07, Heikki Linnakangas <[EMAIL PROTECTED]> wrote:
> Gokulakannan Somasundaram wrote:
> > As far as Load Test is concerned, i have tried to provide all the
> relevant
> > details. Please inform me, if i have left any.
> Thanks!
> How large were the tables?

It is in the performance test report. The tables contain 2 million records
each and are 6 columns wide: 3 text and 3 numeric. The same set of tables was
used for both tests, after a refresh from a file.

> Did you run all the queries concurrently? At this point, I think it'd be
> better to run them separately so that you can look at the impact on each
> kind of operation in isolation.
Performance tests are run against a workload, and I have taken the workload
of a small-scale partitioning setup. Running the queries individually has
already been done, and the counts of logical reads have been verified; I have
already suggested that. For some reason, I am not able to convince anyone
that, for simple index scans, logical reads are a good measure of performance.
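
For reference, this is roughly how logical reads can be counted on the
PostgreSQL side; a minimal sketch assuming a test table `bench` with an index
`bench_idx` (hypothetical names), using the standard pg_statio views:

```sql
-- Reset the statistics counters before the run
SELECT pg_stat_reset();

-- Run the index scan under test
SELECT * FROM bench WHERE id = 42;

-- Logical reads for the index = blocks found in cache + blocks read from disk
SELECT idx_blks_hit + idx_blks_read AS logical_reads
FROM pg_statio_user_indexes
WHERE indexrelname = 'bench_idx';
```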

> What kind of an I/O system does the server have?

It's a normal desktop system with a single Seagate ST3400633A disk, 7200 RPM.

> It'd be interesting to get the cache hit/miss ratios, as well as the
> output of iostat (or similar) during the test. How much of the benefit
> is due to reduced random I/O?

Good suggestion. I ran the test on Windows, so let me try perfmon in the next
performance test to monitor I/O during the run.
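
The cache hit ratio itself can also be pulled from the server's own counters,
independent of the OS tools; a sketch assuming the test database is named
`benchdb` (hypothetical name):

```sql
-- Fraction of block requests satisfied from shared_buffers
SELECT blks_hit::float / NULLIF(blks_hit + blks_read, 0) AS cache_hit_ratio
FROM pg_stat_database
WHERE datname = 'benchdb';
```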

> What do the numbers look like if the tables are small enough to
> fit in RAM?

I don't know whether this is a valid production setup against which we need
to benchmark. But if you insist, I will do that and get back to you.

You should do some tuning, the PostgreSQL default configuration is not
> tuned for maximum performance. At least increase checkpoint_segments and
> checkpoint_timeout and shared_buffers. Though I noticed that you're
> running on Windows; I don't think anyone's done any serious performance
> testing or tuning on Windows yet, so I'm not sure how you should tune
> that.
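
For what it's worth, a minimal postgresql.conf fragment along the lines
Heikki suggests might look like this (the values are illustrative, not
recommendations for this particular hardware):

```
shared_buffers = 256MB        # default is far smaller; size to the workload
checkpoint_segments = 16      # default is 3; fewer, larger checkpoints
checkpoint_timeout = 15min    # default is 5min
```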

What we are trying to do here is compare the performance of two indexing
structures. AFAIK, a performance test comparing two software implementations
should not use parameter settings favorable to one of them, and I have not
changed any settings in favor of the thick index. But I have a limited setup
from which I am trying to contribute, so please don't ask me to run the tests
against large-scale servers.

I think a better idea would be to draw up a performance-testing workload mix
(taking into account the QoS parameters used in a normal database, purging
frequency, and typical workload models used in the industry), with freedom in
the choice of hardware/software. That might resolve some of the load-test
disagreements.

CertoSQL Project,
Allied Solution Groups.
