Good point ;)

I'm a bit scarred when it comes to joins, since I once had a co-worker who
created
a MySQL join resulting in the cartesian product of two 1GB tables ;)
But I guess that won't happen this time, will it?
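For the record, that kind of accident is easy to reproduce: leave out the
join condition and the row count becomes the product of the two table sizes.
A minimal sketch using Python's sqlite3 module (table and column names here
are made up for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE a (id INTEGER)")
cur.execute("CREATE TABLE b (id INTEGER)")
cur.executemany("INSERT INTO a VALUES (?)", [(i,) for i in range(1000)])
cur.executemany("INSERT INTO b VALUES (?)", [(i,) for i in range(1000)])

# Missing join condition: every row of a pairs with every row of b.
cartesian = cur.execute("SELECT COUNT(*) FROM a, b").fetchone()[0]
print(cartesian)  # 1000 * 1000 = 1000000

# With a proper join condition the result stays bounded by the table size.
joined = cur.execute(
    "SELECT COUNT(*) FROM a JOIN b ON a.id = b.id").fetchone()[0]
print(joined)  # 1000
```

With 1GB tables the same mistake just scales up, which is why it hurt.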

-----Original Message-----
From: Jay Sprenkle [mailto:[EMAIL PROTECTED] 
Sent: Monday, April 10, 2006 9:32 PM
To: sqlite-users@sqlite.org
Subject: Re: [sqlite] How to handle large amount of data?

On 4/10/06, André Goliath <[EMAIL PROTECTED]> wrote:
> Would you think the amount of memory needed would be acceptable for a
> field_data with
> 1,000,000 objects * 21 fields * 1,000 chars ?
> It is quite unlikely to reach that DB size,
> but I have to expect the unexpected ;)

It does not load the entire database, or the entire index, into memory.
I would bet you won't have a memory problem. Good indexing can make
even huge searches fast. Look at Google as an example ;)
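To make that concrete, here is a hedged sketch (the field_data name is taken
from the question above; the column names idx_obj_field, obj_id, and field
are my own invention). EXPLAIN QUERY PLAN lets you verify that SQLite uses
an index search rather than a full table scan:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE field_data (obj_id INTEGER, field INTEGER, value TEXT)")
cur.execute("CREATE INDEX idx_obj_field ON field_data (obj_id, field)")
cur.executemany(
    "INSERT INTO field_data VALUES (?, ?, ?)",
    [(o, f, "x" * 10) for o in range(100) for f in range(21)])

# The query plan names the index, confirming the lookup avoids a scan.
plan = cur.execute(
    "EXPLAIN QUERY PLAN "
    "SELECT value FROM field_data WHERE obj_id = ? AND field = ?",
    (42, 7)).fetchall()
print(plan)
```

Only the rows matching the indexed lookup are touched, so memory use stays
proportional to the result set, not the table size.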
