>
>
> > 
> > 
> > The --cachedir directory /home/s3qlcache has 283G of free storage, 
> > nevertheless it breaks. Is there a limit on the database size? 
>
> I don't think so. What's interesting is that the error happens during 
> the VACUUM operation, which actually shrinks the database size by 
> rewriting it completely. I believe it does so by writing into a 
> completely fresh file. 
>
 

>
> At this point fsck.s3ql has effectively completed, so if you run it 
> again it shouldn't pick up any errors. Is that correct? 
>

Only tried now, and yes, that's correct:

alx@zen:~$ sudo fsck.s3ql  --authfile "/myetc/s3ql/auth/s3ql_authinfo" 
--cachedir "/home/s3qlcache/"  gs://ideiao
[sudo] password for alx:
Requesting new access token
Starting fsck of gs://ideiao/
Using cached metadata.
File system is marked as clean. Use --force to force checking.



> Also, could you try out to open the db in the sqlite3 command line 
> utility and execute the VACUUM command there? 
>
It gave the same error. To do the vacuuming, SQLite needs a temp 
directory, which is selected in the order explained here: 
<https://sqlite.org/tempfiles.html#tempdir>
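 
To illustrate that selection order (assuming the temp_store_directory 
pragma is not set, SQLite looks at SQLITE_TMPDIR, then TMPDIR, then falls 
back to /var/tmp), one can check which directory will be used and how 
much space the candidates have with something like:

echo "${SQLITE_TMPDIR:-${TMPDIR:-/var/tmp}}"
df -h /var/tmp /tmp /home/s3qlcache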

In my case, /var/tmp didn't have enough free storage to do the vacuum, 
which led to the error. I freed up some space and then the command 
finished without errors.
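 
An alternative to freeing space would be to point SQLite at the cache 
directory itself (which has plenty of room) via the SQLITE_TMPDIR 
environment variable. Roughly like this (the metadata database path is 
just a placeholder, not the actual file name s3ql uses):

sudo env SQLITE_TMPDIR=/home/s3qlcache sqlite3 <path-to-metadata.db> "VACUUM;"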

To avoid the issue, maybe it's a good idea to set PRAGMA 
temp_store_directory to the --cachedir directory before vacuuming.
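 
For reference, a rough sketch of what that would look like from the 
sqlite3 shell (the directory here is just an example):

sqlite> PRAGMA temp_store_directory = '/home/s3qlcache';
sqlite> VACUUM;

Note that the SQLite documentation marks temp_store_directory as 
deprecated, so the SQLITE_TMPDIR environment variable may be the safer 
long-term option.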


Thanks.
Alexandre
