Hello.

I have an application that uses a central database (Oracle or SQL
Server) and creates a copy of the data in an SQLite database file.

This file holds a read-only copy of more or less all the central
database tables: about 50 tables, with up to 100k records spread
across them.

When the application starts, it sends a request to a service running
where the central database is; the service creates the database file,
creates the tables, fills them with data, and builds the necessary
indexes. The file is then compressed, sent to the client, and deleted,
because it is not needed any more.
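For what it's worth, that server-side step could be sketched roughly as
below (in Python rather than .NET, and with an invented table name and
schema, purely for illustration):

```python
import gzip
import os
import sqlite3
import tempfile

def build_snapshot(rows):
    """Build a temporary SQLite file, fill and index it, return it gzipped."""
    fd, path = tempfile.mkstemp(suffix=".db")
    os.close(fd)
    conn = sqlite3.connect(path)
    # Hypothetical schema standing in for the ~50 real tables.
    conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
    conn.executemany("INSERT INTO customers VALUES (?, ?)", rows)
    conn.execute("CREATE INDEX idx_customers_name ON customers (name)")
    conn.commit()
    conn.close()
    # Compress the finished file for transfer, then delete it: the file
    # is purely temporary, as described above.
    with open(path, "rb") as f:
        payload = gzip.compress(f.read())
    os.remove(path)
    return payload
```
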

This operation is quite heavy and takes several seconds (about 20
seconds on my laptop, generating a 1700 KB data file).

Since the file is purely temporary, I installed a ramdisk driver and
tried to create the file there instead of on the hard disk.

The difference is really astonishing: 0.9 seconds instead of more than
20. This means I don't have to worry about performance any more.

There is only one problem: sizing the ramdisk. The server may receive
multiple concurrent requests, so the ramdisk must be dimensioned
accordingly, wasting memory that is normally unused.
This is simple, but it brings administration problems and
error-checking routines that I would like to avoid.

The question is:
Is there a way to create the database in memory (this already exists:
use :memory: as the file name) and to access the allocated memory
before it is deallocated when the database is closed?

I had a look at the sources but I did not understand how memory
allocation takes place. I imagine the library does not use one
contiguous block of memory but some sort of list...

My program is written in .NET and the compression routines I'm using
are stream based, so I would need to create a memory stream over the
internal buffers that the compression routine can consume...
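If such a contiguous buffer can be obtained, the stream-based
compression step becomes trivial. In Python terms (standing in for the
MemoryStream + GZipStream pair one would use in .NET), a sketch:

```python
import gzip
import io

def compress_image(image: bytes) -> bytes:
    # Wrap the in-memory database image in a stream and feed it to a
    # stream-based compressor, mirroring a MemoryStream/GZipStream pair.
    out = io.BytesIO()
    with gzip.GzipFile(fileobj=out, mode="wb") as gz:
        gz.write(image)
    return out.getvalue()
```

The compressed payload can then be sent straight to the client, with no
temporary file and no ramdisk involved.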

Bye, Michele
