Hello.

We have an app on CE 5 and CE 7.

We are keen to move our main data from SQL CE to SQLite, as SQLite is
superior in many ways.

The app runs on a mobile device, where power can be removed at any time.
The data is stored on SD cards formatted with the FAT filesystem.

The IO subsystem is known to be slow at times. As such, corruption and
data loss are our number one fear. Our main aim is to keep IO to a
minimum, thus reducing the risk of corruption anywhere: corruption of the
database file itself, but also of the filesystem, which would leave us
unable to access the file.

Of course this fear has nothing to do with SQLite per se, but we're trying
to understand how the risk might *change* as we move from SQL CE to
SQLite.

With SQLite, we're using synchronous=FULL (as we don't want to lose data)
and journal_mode=PERSIST (to avoid the unnecessary IO of deleting and
re-creating the journal file on every transaction).
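
For reference, applying those two settings from the SQLite C API looks
roughly like this (a minimal sketch only: the "data.db" name and the
stripped-down error handling are placeholders, not our real code):

    #include <sqlite3.h>

    /* Sketch: open the database and apply the durability settings
       described above ("data.db" is a placeholder name). */
    int open_db(sqlite3 **db)
    {
        if (sqlite3_open("data.db", db) != SQLITE_OK)
            return -1;

        /* FULL: SQLite syncs the journal and the database at the
           critical moments, so a power cut mid-transaction should
           not corrupt the database. */
        sqlite3_exec(*db, "PRAGMA synchronous=FULL;", 0, 0, 0);

        /* PERSIST: the rollback journal is kept and its header
           zeroed at commit, rather than the file being deleted and
           re-created per transaction (each delete/create would mean
           extra FAT metadata writes). */
        sqlite3_exec(*db, "PRAGMA journal_mode=PERSIST;", 0, 0, 0);

        return 0;
    }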

In SQL CE, the minimum interval for flushes to disk is 1 second, which is
what we've been using.

My question is (finally I get to it!): with these settings, it sounds like
if we do, say, 3 transactions in a particular second, we will end up doing:

 - 3 updates to the FAT to obtain the exclusive lock
 - 3 writes to the sqlite journal
 - 3 writes to the sqlite main db file
 - 3 updates to the FAT to release the exclusive lock, update the last
modified file date, etc.


So that's a total of 12 writes to the filesystem (excluding any reads
etc. required by the actual transactions).

Whereas in SQL CE this would have been about 1 write: it just appends the
new data for all 3 at the end of the flush interval, and it presumably
keeps the file open, so there are no FAT updates.

a) Is my understanding roughly correct here in terms of the IO?
b) If so, is there any way to reduce the IO effort (for example by
batching writes into fewer transactions, as in the sketch below)?
c) Can anyone comment on whether the Windows CE VFS actually honours
SQLite's fsync calls, or whether the OS perhaps buffers them anyway?
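
To make (b) concrete, the kind of change we have in mind is batching:
wrapping several logical updates in one explicit transaction, so the
journal write, the main db write, and the FAT lock/unlock updates are
paid once per batch rather than once per update. A rough sketch (the
"readings" table and the SQL are invented purely for illustration):

    #include <sqlite3.h>

    /* Hypothetical illustration for question (b): insert n values in
       ONE transaction instead of n separate ones.  The table and the
       SQL are made up for the example. */
    int save_batched(sqlite3 *db, const int *values, int n)
    {
        sqlite3_stmt *stmt = 0;
        int i, rc;

        if (sqlite3_exec(db, "BEGIN;", 0, 0, 0) != SQLITE_OK)
            return SQLITE_ERROR;

        rc = sqlite3_prepare_v2(db,
                "INSERT INTO readings(value) VALUES (?)", -1, &stmt, 0);

        for (i = 0; rc == SQLITE_OK && i < n; i++) {
            sqlite3_bind_int(stmt, 1, values[i]);
            if (sqlite3_step(stmt) != SQLITE_DONE)  /* DONE on success */
                rc = SQLITE_ERROR;
            sqlite3_reset(stmt);
        }
        sqlite3_finalize(stmt);

        /* One COMMIT = one round of journal + db syncs (and FAT
           lock/unlock metadata) for all n rows, not one per row. */
        if (rc == SQLITE_OK)
            return sqlite3_exec(db, "COMMIT;", 0, 0, 0);

        sqlite3_exec(db, "ROLLBACK;", 0, 0, 0);
        return rc;
    }

Against the accounting above, that would turn roughly 4 writes per
transaction into 4 writes per batch, at the cost of a wider window in
which a power cut rolls back the whole batch.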

Much thanks in advance for any pointers.
Nik
