On Fri, 5 Apr 2019 15:45:10 +0300
Arthur Blondel <arthur5blon...@gmail.com> wrote:

> The data is always the same. That's why removing one row should be
> enough to insert a new one.
> My problem is that sometimes I need to remove many rows to add one
> new one.

SQLite *could* avoid that problem by pre-allocating space in the
journal sufficient to permit a single row to be deleted.  But it's not
obvious to me that the complexity is worth it, given the size of disks
these days and consequent rarity of the problem.  

If I were in your shoes, I'd consider maintaining a "dummy" file that's
expendable in the event of a SQLITE_FULL error.  

Compute how much space SQLite needs to delete a row.  Maybe double that
for safety's sake.  Create a file that size, and fill it with deadbeef
just to be sure the filesystem actually allocates the blocks, rather
than storing a sparse or compressed file.  Write functions to create
and delete that file, because you'll want to do it consistently.  
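If it helps, here's a rough sketch of those two helpers in Python.  The
path and size are placeholders; size the file from your own page size
and worst-case journal estimate.

```python
# Sketch of the "spare file" helpers.  SPARE_PATH and SPARE_SIZE are
# illustrative defaults, not anything SQLite itself defines.
import os

SPARE_PATH = "/var/lib/myapp/spare.bin"  # hypothetical location
SPARE_SIZE = 2 * 1024 * 1024             # doubled estimate, for safety

def create_spare(path=SPARE_PATH, size=SPARE_SIZE):
    """Create the expendable file, filled with 0xDEADBEEF so the
    filesystem really allocates the blocks (no sparse file)."""
    pattern = bytes.fromhex("deadbeef")
    with open(path, "wb") as f:
        f.write(pattern * (size // len(pattern)))
        f.flush()
        os.fsync(f.fileno())  # make sure the blocks hit the disk

def delete_spare(path=SPARE_PATH):
    """Reclaim the reserved space."""
    os.remove(path)
```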

When you encounter SQLITE_FULL, delete the file, do the deed, and
re-create the file.  If you can't recreate the file, you have an
unrecoverable error, but an intact database.  
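The recovery path might look something like this sketch.  The table
name, paths, and sizes are made up for illustration; Python's sqlite3
module surfaces SQLITE_FULL as an OperationalError, so the string test
below is a crude stand-in for checking the real error code.

```python
# On SQLITE_FULL: drop the expendable file, retry the DELETE, then
# recreate the file.  Table "t" and the parameters are placeholders.
import os
import sqlite3

def delete_row_with_spare(conn, row_id, spare_path, spare_size):
    try:
        conn.execute("DELETE FROM t WHERE id = ?", (row_id,))
        conn.commit()
    except sqlite3.OperationalError as e:
        if "full" not in str(e).lower():
            raise
        os.remove(spare_path)  # free the reserved space, then retry
        conn.execute("DELETE FROM t WHERE id = ?", (row_id,))
        conn.commit()
        # Recreate the spare.  If this fails, you're out of space for
        # good, but the database itself is intact.
        with open(spare_path, "wb") as f:
            f.write(bytes.fromhex("deadbeef") * (spare_size // 4))
            f.flush()
            os.fsync(f.fileno())
```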

It's not a perfect solution.  To guard against other processes seizing
the space while you're trying to use it, you'd have to wall off the
space, maybe with a loopback filesystem.  But it'd get you further down
the road than you are now.  

--jkl