On Tue, Aug 9, 2011 at 11:26 PM, Trevyn Meyer <[email protected]> wrote:
> I have a client who expects to have some large tables.  10-100 GB of data.
>
> Do you have any suggestions on how to handle those?  I suspect that even
> with indexes there would be speed issues?
>
> I can imagine a rolling system, where tables are rolled, like log files or
> something?  Any suggestions on how to handle 20 tables, with lots of
> rows?  Break them into smaller tables?


What specific problems are you expecting to run into?  Indexes may make
all the difference in the world, depending on how the tables are queried.
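
Here's the kind of thing I mean (the table and column names are made up,
just for illustration): a date-range query that has to scan the whole
table without an index turns into a cheap range scan with one.

    -- assuming a table like events(id, user_id, created_at, ...)
    CREATE INDEX idx_events_created_at ON events (created_at);

    EXPLAIN SELECT COUNT(*)
      FROM events
     WHERE created_at >= '2011-07-01'
       AND created_at <  '2011-08-01';

EXPLAIN showing "range" on the new index instead of "ALL" (a full table
scan) is usually the difference that matters on a 10-100 GB table.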

In cases where we've needed to keep the table + index size smaller, we've
used an approach where each table stores only a month of data.  In our
case we were fine with merging multiple months' worth of data in
application code, which also let us put different month tables on
different servers.

Another thing I'll throw in: how was the 10-100 GB estimate done?  Was
that for MyISAM or InnoDB tables?  What sort of indexing?  Several
factors beyond the raw data you plan to store come into play for how much
space is actually used on disk.  An extra 100 bytes per row that you left
out of your calculation starts to add up once you have tens of millions
of rows.
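
As a rough back-of-the-envelope example, 100 overlooked bytes per row
across 50 million rows is about 5 GB of extra data before indexes.  Once
some real data is loaded you can check the actual on-disk footprint from
information_schema (substitute your own database name):

    SELECT table_name,
           ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
      FROM information_schema.TABLES
     WHERE table_schema = 'your_database_name';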


-- 
Joseph Scott
[email protected]
http://josephscott.org/
