> It seems that if I know the type and frequency of the queries a
> database will be seeing, I could split the database by hand over
> multiple disks and get better performance than I would with a RAID array
> with similar hardware.

Unlikely, but possible if you had radically different hardware for
different tables.

> Six large (3-7 Mrow) 'summary' tables, each being updated continuously
> by 5-20 processes with about 0.5 transactions/second/process.

Well, you should get close to an order of magnitude better performance from
a RAID controller with write-back cache on those queries.
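
To put rough numbers on that (a back-of-envelope sketch; the 4 ms rotational latency and one-WAL-flush-per-commit model are illustrative assumptions, not measurements):

```python
# Sketch of why a write-back cache helps synchronous commits.  Without
# the cache, each commit waits for the WAL block to reach the platter --
# roughly one rotation of latency.  With the cache, the controller acks
# the write from battery-backed RAM almost immediately.

RPM = 15_000                          # 15k rpm SCSI disk, as in the post
rotation_s = 60 / RPM                 # one full rotation: 4 ms

# Ceiling on synchronous commits when every commit eats one rotation:
max_commits_no_cache = 1 / rotation_s          # 250 commits/sec

# Current load from the post: up to 20 processes * 0.5 tx/sec each.
current_load = 20 * 0.5                        # 10 tx/sec

print(f"ceiling without cache: {max_commits_no_cache:.0f} commits/sec")
print(f"current load:          {current_load:.0f} tx/sec")
```

The load is well under the rotational ceiling, but each process still stalls ~4 ms per commit; the cache removes that stall, which is where the order-of-magnitude figure comes from.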

> Periodically (currently every two weeks), join queries are
> performed between one of the 'summary' tables(same one each time) and
> each of the other five.  Each join touches most rows of both tables,
> indexes aren't used.  Results are written into a separate group of
> 'inventory' tables (about 500 Krow each), one for each join.

The more disks the data is spread over the better (the RAID controller
will help here with striping).
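
As a toy model of what striping buys a big sequential join (round-robin block placement; the stripe size and disk count here are made-up parameters, and real controllers vary):

```python
# RAID-0 style striping: logical blocks are dealt out round-robin across
# the disks, so a large sequential scan keeps every spindle busy at once.

def disk_for_block(block: int, n_disks: int, blocks_per_stripe: int = 16) -> int:
    """Return which disk a logical block lands on under round-robin striping."""
    return (block // blocks_per_stripe) % n_disks

# A 64-block sequential read over 4 disks touches all four spindles:
disks_touched = {disk_for_block(b, n_disks=4) for b in range(64)}
print(sorted(disks_touched))
```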

> There are frequent (100-1000/day) queries of both the
> inventory and summary tables using the primary key -- always using the
> index and returning < 10 rows.

RAM is what you need to cache the data and indexes, and then as much CPU
power as you can get.
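
A rough sizing sketch for that cache, using the row counts from the post (the assumed average row width is a pure guess; measure the real on-disk size of your tables instead):

```python
# Very rough working-set estimate for keeping the summary and inventory
# tables in RAM.  Row counts come from the post; the row width (bytes,
# including per-row overhead) is an illustrative assumption.

ROW_WIDTH = 150                          # assumed bytes/row -- measure yours

summary_rows = 6 * 5_000_000             # six tables, 3-7 Mrow each (midpoint)
inventory_rows = 5 * 500_000             # five tables, ~500 Krow each

total_bytes = (summary_rows + inventory_rows) * ROW_WIDTH
print(f"approx working set: {total_bytes / 2**30:.1f} GiB")
```

Under these assumptions that is roughly 4-5 GiB before indexes, which gives a feel for how much RAM "cache the data and indexes" actually implies.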

> We're currently getting (barely) acceptable performance from a single
> 15k U160 SCSI disk, but db size and activity are growing quickly.
> I've got more disks and a battery-backed LSI card on order.

3 or more disks in a stripe set, with write-back caching, will almost
certainly give a huge performance boost.  Try that first, and only if you
have issues should you think about futzing with symlinks etc.
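
For reference, the symlink trick amounts to moving a directory onto another filesystem and linking it back in place. Here is a generic, self-contained sketch using throwaway temp directories; the paths are hypothetical stand-ins, and with a real database you would stop the server before moving anything:

```python
# Generic "move it and symlink it back" sketch.  Temp dirs stand in for
# the data directory and a second disk; nothing here touches a real DB.
import os
import shutil
import tempfile

root = tempfile.mkdtemp()
old = os.path.join(root, "data", "pg_xlog")      # stand-in for the WAL dir
fast_disk = os.path.join(root, "disk2")          # stand-in for another spindle
os.makedirs(old)
os.makedirs(fast_disk)
open(os.path.join(old, "segment"), "w").close()  # pretend WAL segment

shutil.move(old, os.path.join(fast_disk, "pg_xlog"))  # relocate the dir
os.symlink(os.path.join(fast_disk, "pg_xlog"), old)   # link it back

# The database still opens the original path, but I/O hits the new disk.
print(os.path.islink(old), os.path.exists(os.path.join(old, "segment")))
```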

