Re: Most efficient way of handling a large dataset

2008-10-25 Thread Mark Goodge
Joerg Bruehe wrote: Hi Mark, all! Mark Goodge wrote: I'd appreciate some advice on how best to handle a biggish dataset consisting of around 5 million lines. At the moment, I have a single table consisting of four fields and one primary key: partcode varchar(20) region varchar(10) location va ...

Re: Most efficient way of handling a large dataset

2008-10-24 Thread Joerg Bruehe
Hi Mark, all! Mark Goodge wrote: > I'd appreciate some advice on how best to handle a biggish dataset > consisting of around 5 million lines. At the moment, I have a single > table consisting of four fields and one primary key: > > partcode varchar(20) > region varchar(10) > location varchar(50) ...

Re: Most efficient way of handling a large dataset

2008-10-24 Thread Brent Baisley
On Fri, Oct 24, 2008 at 6:59 AM, Mark Goodge <[EMAIL PROTECTED]> wrote: > I'd appreciate some advice on how best to handle a biggish dataset > consisting of around 5 million lines. At the moment, I have a single table > consisting of four fields and one primary key: > > partcode varchar(20) > regio ...

Re: Most efficient way of handling a large dataset

2008-10-24 Thread Jim Lyons
You might consider adding qty to the index so your queries can be satisfied by the index lookup alone, saving an extra step, since the database then won't need to go back to the row data just to fetch that one field (qty). You might also consider making all fields non-null and, if you keep the fields as char ...
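Jim's suggestion is what is usually called a covering index. A minimal sketch of what that could look like, assuming a table name of `part_stock` (the thread never names the table, so that identifier is invented here):

```sql
-- Assumed table name; the original post does not give one.
-- Appending qty to an index on the lookup columns lets MySQL answer
-- the query from the index alone ("Using index" in EXPLAIN), without
-- a second read of the data row. This mattered most on the MyISAM
-- engine common in 2008, where index and data live in separate files.
ALTER TABLE part_stock
  MODIFY qty INT NOT NULL,
  ADD INDEX idx_part_region_loc_qty (partcode, region, location, qty);

-- A lookup like this can then be served entirely from the index:
SELECT qty
  FROM part_stock
 WHERE partcode = 'ABC123'
   AND region   = 'EU'
   AND location = 'Warehouse 1';
```

Note that with InnoDB the primary key is the clustered index, so a lookup on the full primary key already reaches qty without an extra step; the covering-index trick pays off for secondary indexes or for MyISAM tables.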

Most efficient way of handling a large dataset

2008-10-24 Thread Mark Goodge
I'd appreciate some advice on how best to handle a biggish dataset consisting of around 5 million lines. At the moment, I have a single table consisting of four fields and one primary key: partcode varchar(20) region varchar(10) location varchar(50) qty int(11) PRIMARY KEY (partcode, region, lo ...
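For reference, the schema Mark describes can be pieced together from the preview snippets above. A sketch under two stated assumptions: the table name `part_stock` is invented (the post never names it), and the truncated PRIMARY KEY is completed with `location`, which the column list makes the natural reading:

```sql
-- Reconstruction of the schema from the original post.
-- Assumptions: the table name is hypothetical, and the primary key's
-- last column (cut off after "lo" in the preview) is taken to be
-- location. Column types are exactly as quoted in the thread.
CREATE TABLE part_stock (
  partcode VARCHAR(20) NOT NULL,
  region   VARCHAR(10) NOT NULL,
  location VARCHAR(50) NOT NULL,
  qty      INT(11),
  PRIMARY KEY (partcode, region, location)
);
```

With roughly 5 million rows this composite primary key keeps each (partcode, region, location) combination unique and supports the part/region/location lookups the thread goes on to discuss.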