Hi,

If you are using C++, then try hash_map. I've used it on string keys with more than 50,000 records, all in memory. Very fast, and much easier to program than BerkeleyDB.
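Roughly what I mean, as a minimal sketch (I'm using std::unordered_map here, the standardized successor to the pre-standard hash_map; the key and value types are just illustrative):

#include <iostream>
#include <string>
#include <unordered_map>

int main() {
    // in-memory hash table keyed on strings
    std::unordered_map<std::string, int> table;

    // store data
    table["alpha"] = 1;
    table["beta"]  = 2;

    // check for presence: average O(1) lookup, no disk access
    if (table.find("alpha") != table.end())
        std::cout << "alpha is present\n";

    // remove an entry when it is no longer needed
    table.erase("beta");

    std::cout << "entries remaining: " << table.size() << "\n";
    return 0;
}

With GCC's pre-standard hash_map you would include <ext/hash_map> and supply your own hash functor for std::string, but the insert/find/erase usage is essentially the same.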


----- Original Message ----- From: "Lloyd" <[EMAIL PROTECTED]>
To: <sqlite-users@sqlite.org>
Sent: Wednesday, April 11, 2007 11:20 PM
Subject: Re: [sqlite] Data structure


On Wed, 2007-04-11 at 10:00 -0500, P Kishor wrote:
I think, judging from Lloyd's email address, (s)he might be limited to
whatever CDAC, Trivandrum provides its users.

Lloyd, you already know what size your data set is. Especially if it
doesn't change, putting the entire dataset in RAM is the best option.
If you don't need SQL capabilities, you can probably just use
something like BerkeleyDB or DBD::Deep (if using Perl), and that will
be plenty fast. Of course, if it can't be done then it can't be done,
and you will have to recommend more RAM for the machines (the CPU
seems fast enough; it is the memory that may be the bottleneck).

Sorry, I am not talking about the limitations of the system on our side,
but about the end users who use our software. I want the tool to run at
its best even on a low-end machine.

I don't need the capabilities of a database here. I just want to store
data, check whether it is present, and remove it when it is no longer needed.

I will certainly check out BerkeleyDB. The data set must be in RAM, because
its total size is very small (a few megabytes). I just want to speed up the
search, which is done millions of times.

Thanks,

Lloyd

