You might consider partitioning the table based on one of the column's
values.  The canonical example is splitting a sales table by date range.
This decreases load time and query time by keeping the indexes smaller.
You would have to set up a series of rules, though, if you want to be able
to use the partitions transparently.  I am working with a 36+ billion row
table in Postgres that is composed of 12 partitions of 3 billion rows each,
and wouldn't really be able to use it effectively without a partitioning
scheme.
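
For the archives, here is a minimal sketch of the date-range scheme
described above, using PostgreSQL table inheritance with CHECK constraints
plus an insert rule so the parent table can be used transparently.  Table
and column names are made up for illustration, and you'd want one child
table and one rule per date range:

```sql
-- Parent table; queries go against this and fan out to the children.
CREATE TABLE sales (
    sale_id    bigint,
    sale_date  date NOT NULL,
    amount     numeric
);

-- One child per month.  The CHECK constraint lets the planner skip
-- partitions that can't match (with constraint_exclusion enabled).
CREATE TABLE sales_2006_01 (
    CHECK (sale_date >= DATE '2006-01-01' AND sale_date < DATE '2006-02-01')
) INHERITS (sales);

CREATE TABLE sales_2006_02 (
    CHECK (sale_date >= DATE '2006-02-01' AND sale_date < DATE '2006-03-01')
) INHERITS (sales);

-- A rule per partition routes inserts on the parent to the right child.
CREATE RULE sales_insert_2006_01 AS
    ON INSERT TO sales
    WHERE (NEW.sale_date >= DATE '2006-01-01'
       AND NEW.sale_date < DATE '2006-02-01')
    DO INSTEAD
    INSERT INTO sales_2006_01 VALUES (NEW.*);
```

With `SET constraint_exclusion = on;` (available as of 8.1), a query like
`SELECT ... FROM sales WHERE sale_date = '2006-01-15'` will only scan the
one partition whose CHECK constraint can match.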

-Allen



On 5/8/06, Alex Brelsfoard <[EMAIL PROTECTED]> wrote:
>
> Howdy all,
>
>     I know this isn't specifically a Perl question.  But I AM using Perl
> with this project.
>
> Basically I am dealing with using, storing, and sorting a LOT of data in a
> MySQL database.
> With all the data in the table, it comes to 404.8 million rows.  As a
> backup SQL file, that is just under 80GB.
>
> I am using the InnoDB engine.
>
> I was just wondering if anyone else has had experience working with
> databases this large, and using MySQL.
> I've run into some smaller problems along the way due to the immensity of
> this table.
> In the end, to do what we want I will be creating a smaller table, with a
> subset of entries from the original.
> But the original needs to exist as well.
>
> I'm looking for heads-up warnings about things I should watch out for due
> to the size of this thing, or any suggestions on speedier sorting and
> querying.
>
> Thanks a lot.
>
> --Alex
>
> _______________________________________________
> Boston-pm mailing list
> [email protected]
> http://mail.pm.org/mailman/listinfo/boston-pm
>
 