I have a real-time program that logs more than 30,000 records, each
record about 200 bytes, per day, and the company where it is installed
operates 24/365. I installed the project in August 2005 and it has
been working fine to date. It performs some report generations (4 or
5) every day. The data is dumped to an SQLite database, and I don't
know the current size of that database. But the PC has just 256 MB of
RAM, a 160 GB hard disk, and an 800 MHz CPU. It has been running for
the last 3-4 years without shutting down and without any data
corruption or program crash. It is designed to VACUUM the database on
the 20th of each month, or the nearest Sunday.
The database was 4 GB when I last checked, and it may have grown to
6 or 7 GB by now.
I modified the program long ago to split the database on a yearly
basis, but the company is not ready to accept the modification.
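The monthly VACUUM rule above ("the 20th of each month or the nearest Sunday") can be sketched roughly like this. This is a minimal illustration, not the actual scheduler; the helper names and the interpretation that maintenance runs on the Sunday closest to the 20th are my assumptions:

```python
from datetime import date, timedelta

def nearest_sunday(d: date) -> date:
    """Return the Sunday closest to date d."""
    # date.weekday(): Monday == 0 ... Sunday == 6
    forward = d + timedelta(days=(6 - d.weekday()) % 7)  # next (or same) Sunday
    backward = forward - timedelta(days=7)               # previous Sunday
    return forward if (forward - d) <= (d - backward) else backward

def is_vacuum_day(today: date) -> bool:
    """True when today is the Sunday nearest the 20th of its month
    (or the 20th itself, when the 20th falls on a Sunday)."""
    return today == nearest_sunday(date(today.year, today.month, 20))
```

A daily check like this could then issue a plain `VACUUM` statement over the sqlite3 connection whenever it returns True.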

But it still works fine. Is that enough for you......



On 2/17/09, Alexey Pechnikov <pechni...@mobigroup.ru> wrote:
> Hello!
>
> In a message of Monday 16 February 2009 22:14:03, Jay A. Kreibich wrote:
>> > Of course, write operations must be grouped because memory allocation
>> > for a write transaction is proportional to database size (see the official site).
>>
>>   This limitation was removed about a year ago, around 3.5.7.  Rather than
>>   using a static bit-map for dirty pages, there is a bitvec class that
>>   implements a sparse array, removing most of the memory size
>>   dependencies.
>
> That's very good! Thanks!
>
> Best regards, Alexey.
> _______________________________________________
> sqlite-users mailing list
> sqlite-users@sqlite.org
> http://sqlite.org:8080/cgi-bin/mailman/listinfo/sqlite-users
>


-- 
Regards
Rajesh Nair