On 06.10.2019 22:02 Keith Medcalf wrote:
> On Sunday, 6 October, 2019 13:03, Kadirk <kadirkaracel...@gmail.com> wrote:
>
>> We already have an application-specific WAL file; SQLite updates plus
>> application data go into this WAL file. We take a snapshot of the SQLite
>> and application data to disk to truncate the WAL file, so we can rebuild
>> the latest state whenever needed (after a restart, etc.).
>>
>> We are evaluating SQLite in memory because response time is critical. We
>> target less than ~30 microseconds per query/update for SQLite itself
>> (inserts and selects are roughly 256 bytes to 10 KB). I tried SQLite on
>> disk, but there were hiccups of 50+ milliseconds, which might be expected
>> since file I/O overhead is quite high.
>>
>> I expect there might be a way to take a backup of an in-memory SQLite
>> database while updates are still being processed (as with the on-disk
>> online backup). Maybe something like copy-on-write memory for that?
>>
>> Our data in SQLite is around 10 GB, so using the serialize interface
>> doesn't look feasible. If I understand correctly, this interface
>> allocates contiguous space for all the data and then copies into it,
>> which would lead to out-of-memory issues plus ~10 GB of copy latency.
>
> I think you are barking up the wrong tree. Why do you not simply process
> the updates against both databases (the in-memory transient copy and the
> on-disk persistent one)?

Well, as for copy-on-write: do it like Redis and fork() the process, then persist the database in the forked process. The problem is if you are using threads...
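Keith's dual-write suggestion can be sketched with Python's stdlib sqlite3 module (the `DualDB` class and its method names are illustrative, not an existing API): every write goes to both connections, while reads are served from the fast in-memory copy.

```python
import sqlite3

class DualDB:
    """Apply every write to both an in-memory and an on-disk database,
    but serve reads from the in-memory copy for low latency.
    (Illustrative sketch; error handling and batching omitted.)"""

    def __init__(self, disk_path: str):
        self.mem = sqlite3.connect(":memory:")
        self.disk = sqlite3.connect(disk_path)

    def execute(self, sql: str, params=()):
        # Writes must hit both databases to keep them in step.
        self.mem.execute(sql, params)
        self.disk.execute(sql, params)

    def commit(self):
        # Commit both sides together; a real implementation would need
        # to handle the case where one commit succeeds and the other fails.
        self.mem.commit()
        self.disk.commit()

    def query(self, sql: str, params=()):
        # Reads only touch the in-memory copy.
        return self.mem.execute(sql, params).fetchall()

# Usage: a second :memory: database stands in for a disk path here.
db = DualDB(":memory:")
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (?)", (42,))
db.commit()
print(db.query("SELECT x FROM t"))  # prints [(42,)]
```

The cost is that every write pays the disk latency after all, which is exactly what the original poster was trying to avoid; it only helps if the disk write can be made asynchronous or batched.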
Or use a redis+sqlite combination like https://github.com/RedBeardLab/rediSQL
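The fork()-based copy-on-write idea can be sketched with Python's stdlib sqlite3 module (assumptions: a POSIX system for os.fork, Python 3.7+ for Connection.backup, and a single-threaded writer, since fork() with active writer threads can capture a mid-write state; the `snapshot` helper and file names are illustrative):

```python
import os
import sqlite3
import tempfile

def snapshot(mem_con: sqlite3.Connection, path: str) -> int:
    """Fork the process; the child gets a copy-on-write view of the
    in-memory database and persists it to `path` while the parent keeps
    serving updates. Returns the child's PID to the parent."""
    pid = os.fork()
    if pid == 0:
        # Child: its memory is a COW snapshot of the parent at fork time.
        disk = sqlite3.connect(path)
        mem_con.backup(disk)   # stream the in-memory database to disk
        disk.close()
        os._exit(0)            # skip normal interpreter cleanup in the child
    return pid                 # Parent: continue processing immediately

# Usage: build a small in-memory database, snapshot it, verify the copy.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE kv (k TEXT PRIMARY KEY, v BLOB)")
con.execute("INSERT INTO kv VALUES ('a', x'00ff')")
con.commit()

path = os.path.join(tempfile.mkdtemp(), "snapshot.db")
pid = snapshot(con, path)
os.waitpid(pid, 0)             # demo only; in production, reap the child later

check = sqlite3.connect(path)
print(check.execute("SELECT count(*) FROM kv").fetchone()[0])  # prints 1
```

As the post notes, the appeal is that the parent pays only the cost of the fork itself (page-table copy), not a 10 GB memory copy; pages are duplicated lazily only where the parent writes while the child is persisting.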