I'd appreciate some advice on how best to handle a biggish dataset of
around 5 million rows. At the moment, it all lives in a single table
with four fields and a composite primary key:
CREATE TABLE mytable (
  partcode varchar(20),
  region   varchar(10),
  location varchar(50),
  qty      int(11),
  PRIMARY KEY (partcode, region, location)
);
The biggest variable is partcode, with around 80,000 distinct values.
For statistical purposes, I need to be able to select a sum(qty) for a
given combination of the three key fields, and also to produce a list
of partcodes and their total quantities for a given region and
location. Written out in full, the two statements look like this:
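
-- total quantity for one part in one region and location
SELECT SUM(qty)
FROM mytable
WHERE partcode = 'x'
  AND region = 'y'
  AND location = 'z';

-- per-partcode totals for a given region and location
SELECT SUM(qty), partcode
FROM mytable
WHERE region = 'y'
  AND location = 'z'
GROUP BY partcode;
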
The selection is done via a web-based interface and, unfortunately,
it's currently too slow, so I want to optimise the table for read
speed. Update speed is much less critical: the table isn't updated in
real time, but by a nightly batch job that runs outside normal working
hours (and, apart from the rare occasion when a location is added or
removed, the only thing that changes is the value in qty).
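Just to illustrate what I mean: in effect, the overnight changes boil
down to statements of this shape (the values here are purely
illustrative):

-- illustrative only; the real values come from the batch job
UPDATE mytable
SET qty = 42
WHERE partcode = 'x'
  AND region = 'y'
  AND location = 'z';
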
Does anyone have any suggestions? My initial thought is to replace the
region and location varchar fields with integer fields keyed to
separate lookup tables of region and location names. Would that help,
or is there a better way?
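
For concreteness, this is roughly the sort of structure I have in mind
(the new table and column names, and the integer types, are just
illustrative):

CREATE TABLE regions (
  region_id   smallint unsigned NOT NULL AUTO_INCREMENT,
  region_name varchar(10) NOT NULL,
  PRIMARY KEY (region_id)
);

CREATE TABLE locations (
  location_id   smallint unsigned NOT NULL AUTO_INCREMENT,
  location_name varchar(50) NOT NULL,
  PRIMARY KEY (location_id)
);

-- the existing table, with the two varchars replaced by integer keys
CREATE TABLE mytable (
  partcode    varchar(20) NOT NULL,
  region_id   smallint unsigned NOT NULL,
  location_id smallint unsigned NOT NULL,
  qty         int(11) NOT NULL,
  PRIMARY KEY (partcode, region_id, location_id)
);

The statistics queries would then filter on the integer IDs, or join
to the lookup tables where the names are needed.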
Mark
--
http://mark.goodge.co.uk - my pointless blog
http://www.good-stuff.co.uk - my less pointless stuff