... and there was much rejoicing! :D

Is it therefore safe to say that SQLite can handle 100+ GB of data, or
are other resources consumed exponentially as the database size goes
up? (Linear growth I can understand and handle; exponential growth is
a problem for me.)
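
For context, here is a rough sketch of how I am planning to apply the
page-size remedy from the thread below, plus a cache cap that should
keep per-connection memory bounded regardless of file size. The values
are illustrative guesses on my part, not tested recommendations
(sqlite3 shell):

  -- Minimal sketch; the values are assumptions, not recommendations.
  PRAGMA page_size = 32768;   -- larger pages; must be set before the first
                              -- table is created (or be followed by VACUUM)
  PRAGMA cache_size = 4000;   -- cap the page cache at 4000 pages (roughly
                              -- 130 MB at 32 KB pages), independent of how
                              -- large the database file grows
  CREATE TABLE t(id INTEGER PRIMARY KEY, payload BLOB);

If the cache cap holds, the main remaining per-transaction cost would
be the dirty-page tracking Dan describes below, which the newer data
structure apparently keeps small.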

Thanks,
Bjorn

2009/4/14 Dan <[email protected]>:
>
> On Apr 14, 2009, at 4:00 PM, Bjorn Toft Madsen wrote:
>
>> Hi,
>>
>> The docs for SQLite state:
>> - Situations Where Another RDBMS May Work Better
>>   - Very large datasets
>>      - [...] [sqlite] has to allocate a bitmap of dirty pages in the
>> disk file to help it manage its rollback journal. [...]
>>
>> Aside from a larger page size, what remedies are suggested for
>> removing (or minimizing) this requirement?
>
> That statement is out of date. Since it was written, the bitmap has
> been replaced with a better data structure that does not require the
> big per-transaction allocation.
>
> Dan.
>
>
>> SQLite is more or less perfect for my needs, but I expect hundreds and
>> hundreds of gigabytes of data, so this high memory requirement means I
>> may have to roll my own solution :(
>>
>> Thanks,
>> Bjorn