Joerg Bruehe wrote:
Hi Mark, all!
Mark Goodge wrote:
> I'd appreciate some advice on how best to handle a biggish dataset
> consisting of around 5 million lines. At the moment, I have a single
> table consisting of four fields and one primary key:
>
> partcode varchar(20)
> region varchar(10)
> location varchar(50)
> qty int(11)
> PRIMARY KEY (partcode, region, location)
On Fri, Oct 24, 2008 at 6:59 AM, Mark Goodge <[EMAIL PROTECTED]> wrote:
> I'd appreciate some advice on how best to handle a biggish dataset
> consisting of around 5 million lines. At the moment, I have a single table
> consisting of four fields and one primary key:
>
> partcode varchar(20)
> region varchar(10)
> location varchar(50)
> qty int(11)
> PRIMARY KEY (partcode, region, location)
You might consider adding qty to the index, so your queries would be
satisfied by the index lookup alone, saving an extra step: the database
then won't need to go and access the data rows just for the one field,
qty. You might also consider making all fields non-null and, if you
keep the fields as char ...
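A rough sketch of that suggestion, assuming a table named parts and an
index name chosen here for illustration (the original post gives neither),
might look like this:

    ALTER TABLE parts
        MODIFY partcode CHAR(20) NOT NULL,
        MODIFY region   CHAR(10) NOT NULL,
        MODIFY location CHAR(50) NOT NULL,
        MODIFY qty      INT      NOT NULL,
        -- covering index: the lookup columns plus qty, so a qty lookup
        -- can be answered from the index without reading the data rows
        ADD INDEX idx_part_region_loc_qty (partcode, region, location, qty);

    -- a query like this can then be satisfied entirely from the index
    -- (the literal values are made-up examples):
    SELECT qty
      FROM parts
     WHERE partcode = 'ABC123'
       AND region   = 'UK'
       AND location = 'Warehouse 1';

With MyISAM the index and data live in separate files, so a covering index
saves the extra read into the data file; with InnoDB the primary key is
already the clustered index, so the gain from the extra index is smaller.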
Mark Goodge wrote:
I'd appreciate some advice on how best to handle a biggish dataset
consisting of around 5 million lines. At the moment, I have a single
table consisting of four fields and one primary key:
partcode varchar(20)
region varchar(10)
location varchar(50)
qty int(11)
PRIMARY KEY (partcode, region, location)
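For reference, the table as described would correspond to something like
the following CREATE TABLE (the table name is an assumption, and the post
does not say which storage engine is in use):

    CREATE TABLE parts (
        partcode VARCHAR(20),
        region   VARCHAR(10),
        location VARCHAR(50),
        qty      INT(11),
        PRIMARY KEY (partcode, region, location)
    );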