Hi,

Thanks for the reply.


> >> You need storage in the --cachedir directory.
> >
> > I'm not sure what you mean by this. I didn't mention it before, but I
> > removed all the contents from the --cachedir to see if it worked.
>
> S3QL is trying to write data into the directory that you specify with
> --cachedir (defaults to ~/.s3ql/), and getting an out of space error.
> You need to provide more space in the filesystem that contains this
> directory.
>

I moved the --cachedir to another filesystem and re-ran fsck:

alx@zen:/home/s3qlcache$ sudo fsck.s3ql --authfile "/myetc/s3ql/auth/s3ql_authinfo" --cachedir "/home/s3qlcache/" gs://ideiao
Requesting new access token
Starting fsck of gs://ideiao/
Backend reports that file system is still mounted elsewhere. Either
the file system has not been unmounted cleanly or the data has not yet
propagated through the backend. In the later case, waiting for a while
should fix the problem, in the former case you should try to run fsck
on the computer where the file system has been mounted most recently.
You may also continue and use whatever metadata is available in the
backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN
UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD.
Moreover, files and directories that you have deleted since then MAY
REAPPEAR WITH SOME OF THEIR CONTENT LOST.
Enter "continue, I know what I am doing" to use the outdated data anyway:
continue, I know what I am doing
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
Requesting new access token
..processed 321000 objects so far..WARNING: Deleted spurious object 330154
WARNING: Deleted spurious object 330155
WARNING: Deleted spurious object 330156
WARNING: object 330145 only exists in table but not in backend, deleting
WARNING: File may lack data, moved to /lost+found: /homes/hourly.0/home/alx/.s3ql/mount.log
WARNING: object 330146 only exists in table but not in backend, deleting
WARNING: File may lack data, moved to /lost+found: /homes/hourly.0/home/samba/shares/dev/WORK_BKP/PCALX/sync.ffs_lock

Checking objects (sizes)...
Checking blocks (referenced objects)...
Checking blocks (refcounts)...
Checking blocks (checksums)...
Checking inode-block mapping (blocks)...
Checking inode-block mapping (inodes)...
Checking inodes (refcounts)...
WARNING: Inode 21905216 (/homes/.sync/home/backups/ideiao.com/weekly.0/backup/site/html/joomla3/tmp/install_5433e6d66c7f1/html/com_easyblt, setting from 15 to 14
[... lots of warnings like the above ...]
Checking inodes (sizes)...
Checking extended attributes (names)...
Checking extended attributes (inodes)...
Checking symlinks (inodes)...
Checking directory reachability...
Checking unix conventions...
Checking referential integrity...
Dropping temporary indices...
Dumping metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
Compressing and uploading metadata...
Requesting new access token
Wrote 543 MiB of compressed metadata.
Cycling metadata backups...
Backing up old metadata...
Cleaning up local metadata...
ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/local/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.22', 'console_scripts', 'fsck.s3ql')()
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/fsck.py", line 1293, in main
    db.execute('VACUUM')
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/database.py", line 98, in execute
    self.conn.cursor().execute(*a, **kw)
    self.conn.cursor().execute(*a, **kw)
  File "src/cursor.c", line 236, in resetcursor
apsw.FullError: FullError: database or disk is full



alx@zen:/home/s3qlcache$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
[...]
/dev/mapper/vg_data-homes  2.7T  2.3T  268G  90% /home
[...]


The filesystem holding the --cachedir directory /home/s3qlcache has 268G
of free space (see the df output above), yet fsck still fails with "disk
is full". Is there a limit on the database size?
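
From the SQLite documentation, the file format itself supports databases
far larger than 15G, so I don't think a size limit is the problem. But
VACUUM apparently rebuilds the whole database in a temporary file before
replacing the original, so it can need up to twice the database size in
free disk space, and the temporary copy goes to SQLite's temp directory
(SQLITE_TMPDIR, or /tmp on Linux) rather than to --cachedir. If /tmp on
this machine is a small tmpfs, that might explain "disk is full" despite
the free space on /home. I plan to try the following next (the tmp
subdirectory is just my own guess at a workaround):

df -h /tmp /var/tmp

mkdir -p /home/s3qlcache/tmp
sudo env SQLITE_TMPDIR=/home/s3qlcache/tmp fsck.s3ql \
    --authfile "/myetc/s3ql/auth/s3ql_authinfo" \
    --cachedir "/home/s3qlcache/" gs://ideiao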

In the meantime, to rule out a problem with the local storage itself, I
duplicated the db file inside the cache directory:



alx@zen:/home/s3qlcache$ ls -lah
total 15G
drwxr-xr-x  2 root root 4.0K Apr  7 04:22 .
drwxr-xr-x 12 2002 5001 4.0K Apr  7 03:25 ..
-rw-------  1 root root  15G Apr  7 07:27 gs:=2F=2Fideiao=2F.db
-rw-r--r--  1 root root  202 Apr  7 07:25 gs:=2F=2Fideiao=2F.params


alx@zen:/home/s3qlcache$ sudo cp gs\:\=2F\=2Fideiao\=2F.db gs\:\=2F\=2Fideiao\=2F.db.test


alx@zen:/home/s3qlcache$ ls -lah
total 30G
drwxr-xr-x  2 root root 4.0K Apr  7 11:59 .
drwxr-xr-x 12 2002 5001 4.0K Apr  7 03:25 ..
-rw-------  1 root root  15G Apr  7 07:27 gs:=2F=2Fideiao=2F.db
-rw-------  1 root root  15G Apr  7 12:03 gs:=2F=2Fideiao=2F.db.test
-rw-r--r--  1 root root  202 Apr  7 07:25 gs:=2F=2Fideiao=2F.params


alx@zen:/home/s3qlcache$ df -h
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/vg_data-homes  2.7T  2.3T  253G  91% /home
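
If it helps to isolate the problem, I can also try running VACUUM on the
copied database directly, outside of fsck.s3ql (assuming the sqlite3
shell is installed here; the SQLITE_TMPDIR setting is the same guess as
above):

sudo env SQLITE_TMPDIR=/home/s3qlcache sqlite3 './gs:=2F=2Fideiao=2F.db.test' 'VACUUM;'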



What can I do next?

Thanks,

Alexandre
