On 2016/05/04 2:35 PM, Rob Willett wrote:
> Dominique,
>
> We put together a quick C program to try out the C API a few weeks 
> ago. It worked, but it was very slow; from memory, not much different 
> from the sqlite command-line backup system. We put it on the back 
> burner as it wasn't anywhere near quick enough.

You do realize that the backup API restarts the backup once the database 
content changes, right? I'm sure that at the data rates and update 
frequency you describe, the backup would never finish. The backup API is 
quite fast if your destination file is on a not-too-slow drive, but you 
will have to stop the incoming data to allow it to finish.
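For reference, a minimal sketch of the online backup API 
(sqlite3_backup_init / _step / _finish) is below; the file names live.db 
and backup.db are just placeholders. Passing -1 to sqlite3_backup_step 
copies everything in one pass while holding a read transaction on the 
source; stepping in smaller chunks lets writes from other connections 
through, at the cost of the backup restarting when they land.

    /* Minimal sketch: copy src_path into dst_path using the SQLite
     * online backup API. File names in main() are placeholders. */
    #include <stdio.h>
    #include <sqlite3.h>

    static int backup_db(const char *src_path, const char *dst_path)
    {
        sqlite3 *src = 0, *dst = 0;
        int rc = sqlite3_open(src_path, &src);
        if (rc == SQLITE_OK) rc = sqlite3_open(dst_path, &dst);

        if (rc == SQLITE_OK) {
            sqlite3_backup *b = sqlite3_backup_init(dst, "main", src, "main");
            if (b) {
                /* -1 = copy all remaining pages in one step; a read
                 * transaction is held on the source for the duration,
                 * so incoming writes have to wait. */
                sqlite3_backup_step(b, -1);
                sqlite3_backup_finish(b);
            }
            rc = sqlite3_errcode(dst);
        }

        sqlite3_close(src);
        sqlite3_close(dst);
        return rc;
    }

    int main(void)
    {
        int rc = backup_db("live.db", "backup.db");
        printf("backup %s\n", rc == SQLITE_OK ? "ok" : "failed");
        return rc == SQLITE_OK ? 0 : 1;
    }
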

As an aside - you need a better provider, but that said, if it were me I 
would get two sites up from two different providers, one live and one 
stand-by, both the cheap sort so costs stay minimal (usually two cheap 
ones are much cheaper than the next beefier tier). Feed all 
updates/inserts to both sites - each is then a backup of the other, not 
only data-wise, but also one you can switch to with a simple DNS 
redirect should the first site/provider go down for any reason. The 
second site can be interfered with / copied from / backed up / whatever 
without affecting the service to the public.

I only do this with somewhat critical sites, but your use-case sounds 
like it might benefit from it. My second choice would be to simply stop 
operations at a best-case time-slot while the backup / copy completes.

Cheers,
Ryan
