> -----Original Message-----
> From: Daniel Franke [mailto:[EMAIL PROTECTED]
> Sent: Thursday, March 16, 2006 9:32 AM
> To: sqlite-users@sqlite.org
> Subject: [sqlite] Re: concers about database size
>
>
> > > But now, there's another thing. I figured out how large my database
> > > will become and I'm scared of its size: up to 20GB and more! A single
> > > table, 4 columns, each holding an integer (32 bit), will have
> > > approximately 750 million rows. This mounts up to ~11GB. Adding a
> > > unique two-column index, I get another 10GB worth of data; that's an
> > > awful lot.
> >
> > Do you really need 10 gig of data in the same database?
> > At the Sprint data warehouse they kept really large amounts of data
> > (call records for all the cell phone usage), but in a separate
> > file/database for each billing cycle.
>
> The original idea was to get rid of thousands of files by storing
> their data in one single container. Those (ASCII) files add up to
> approx 5GB ...
>
> > If so, are you trying to use a blender to stir the ocean?
> > You might reevaluate if you're using the right tool for the job.
>
> That's my question: IS sqlite the right tool here? =)
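
The quoted size estimate can be sanity-checked with simple arithmetic. This is only a back-of-envelope sketch: it assumes a flat 4 bytes per integer, while SQLite actually stores integers as variable-length values and adds per-row and per-page overhead, so real file sizes will differ.

```python
# Rough check of the figures quoted above. Assumption: 4 bytes per
# 32-bit integer; SQLite's actual varint encoding and page overhead
# are ignored, so this is an order-of-magnitude estimate only.
rows = 750_000_000

raw = rows * 4 * 4                  # four 32-bit integers per row
print(raw / 2**30)                  # ~11.2 GiB of raw column data

# The unique two-column index repeats two key columns plus a rowid
# reference per entry (rowid size here is an assumption).
index = rows * (2 * 4 + 8)
print(index / 2**30)                # roughly another ~11 GiB
```

So the "~11GB table plus ~10GB index" figures in the quoted message are at least plausible before SQLite's own encoding is taken into account.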

And I believe he is asking, "Is this the right problem, here?" :-)

Does sound like an awful lot of data.  I think the question might be
reworded to ask: are there any manageable logical groups of data that
might lend themselves to simple segmentation into separate
tables/databases?

Commercial databases often range into terabytes, but I doubt many
accomplish those numbers with a single table.  None that I have worked
on ever have, including "never ever forget" (spool or delete) SAP. :-)

Fred

