I've been doing a fair bit of research into optimising larger tables and just want a bit of feedback (most of my lecturers are giving conflicting answers).

Say I have a table with 200+ fields (all relating uniquely to the primary key). When querying the first 10 fields I get a fast response, about the same as if I were querying a table with the same number of rows but only 10 fields. However, as I select fields further into the table (e.g. fields 180..190) there is a noticeable decrease in speed (by noticeable I mean around 750 benchmark queries per second slower).
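
To make the comparison concrete, the query shapes I'm benchmarking look roughly like this (wide_table and the field names are placeholders standing in for my real 200+ field table):

  -- stand-in for the wide table; only a handful of its fields are shown here
  CREATE TABLE wide_table (
      id        INT PRIMARY KEY,
      field_1   VARCHAR(50),
      field_2   VARCHAR(50),
      field_180 VARCHAR(50),
      field_181 VARCHAR(50)
  );

  -- selecting fields near the start of the row: fast
  SELECT field_1, field_2 FROM wide_table WHERE id = 42;

  -- selecting fields near the end of the row: around 750 benchmark queries/sec slower
  SELECT field_180, field_181 FROM wide_table WHERE id = 42;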

As a measure to increase the speed, I was thinking a viable option would be to break the table up based on how it is queried: sets of fields that are queried together would be moved into new tables of their own, so long as fields that are used in multiple queries (and would need rejoining later) are not separated, since rejoining is much slower than keeping all the fields in one table to begin with.
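
For example, the kind of split I have in mind looks something like this (table and field names are made up purely to illustrate, not my actual schema):

  -- fields that are queried together move into their own table,
  -- each new table keyed by the same primary key as the original
  CREATE TABLE customer_core (
      customer_id INT PRIMARY KEY,
      full_name   VARCHAR(100),
      email       VARCHAR(100)
  );

  CREATE TABLE customer_billing (
      customer_id  INT PRIMARY KEY,
      card_type    VARCHAR(20),
      billing_addr VARCHAR(200),
      FOREIGN KEY (customer_id) REFERENCES customer_core (customer_id)
  );

  -- a query that only needs billing fields never touches the wider row
  SELECT card_type, billing_addr FROM customer_billing WHERE customer_id = 42;

  -- fields needed together across the split are rejoined on the primary key
  SELECT c.full_name, b.billing_addr
  FROM customer_core c
  JOIN customer_billing b ON b.customer_id = c.customer_id
  WHERE c.customer_id = 42;

The idea being that fields used in the same queries stay together, so the rejoin in the last query would be the exception rather than the rule.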

The main question is: is the method described above the best way to improve the speed of a large table, or should all the fields remain in the same table, given that splitting may cause other problems later on?

One method suggested by one of my lecturers was to leave the table as one and use views; however, after testing this out I found views slower than querying the large table directly (and since they don't increase the table size, it leads me to believe that a view is simply an alias for a query).
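
What I tested was along these lines (reusing the wide_table stand-in from the first example):

  -- the view just names a stored query over the same table;
  -- the rows still live in wide_table, which fits with the table size not changing
  CREATE VIEW late_fields_v AS
      SELECT id, field_180, field_181
      FROM wide_table;

  SELECT field_180, field_181 FROM late_fields_v WHERE id = 42;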

Any feedback is appreciated,
James Austin.

