Hi,
A few days ago, I ran a forced fsck on an s3ql file system and deleted all
the data from the cache folder.
During the process, I accidentally terminated the ssh session, so the fsck
was interrupted.
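For context, the commands were roughly along these lines (reconstructed, so the exact flags may have differed; the paths are the same as in the fsck command below):

# force a check even though the file system was marked clean
sudo fsck.s3ql --force --authfile "/myetc/s3ql/auth/s3ql_authinfo" --cachedir "/myetc/s3ql/cache/" gs://ideiao
# afterwards, remove everything from the local cache directory
sudo rm -rf /myetc/s3ql/cache/*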
Now I get the following when I attempt to run the same fsck command:
sudo fsck.s3ql --authfile "/myetc/s3ql/auth/s3ql_authinfo" --cachedir "/myetc/s3ql/cache/" gs://ideiao
[sudo] password for alx:
Requesting new access token
Starting fsck of gs://ideiao/
Backend reports that file system is still mounted elsewhere. Either
the file system has not been unmounted cleanly or the data has not yet
propagated through the backend. In the later case, waiting for a while
should fix the problem, in the former case you should try to run fsck
on the computer where the file system has been mounted most recently.
You may also continue and use whatever metadata is available in the
backend. However, in that case YOU MAY LOOSE ALL DATA THAT HAS BEEN
UPLOADED OR MODIFIED SINCE THE LAST SUCCESSFULL METADATA UPLOAD.
Moreover, files and directories that you have deleted since then MAY
REAPPEAR WITH SOME OF THEIR CONTENT LOST.
Enter "continue, I know what I am doing" to use the outdated data anyway:
continue, I know what I am doing
Downloading and decompressing metadata...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
> ERROR: Uncaught top-level exception:
Traceback (most recent call last):
  File "src/s3ql/deltadump.pyx", line 555, in s3ql.deltadump.load_table (src/s3ql/deltadump.c:6976)
  File "src/s3ql/deltadump.pyx", line 186, in s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820)
apsw.FullError: database or disk is full

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/python3.4/contextlib.py", line 321, in __exit__
    if cb(*exc_details):
  File "/usr/lib/python3.4/contextlib.py", line 267, in _exit_wrapper
    callback(*args, **kwds)
  File "src/s3ql/deltadump.pyx", line 504, in s3ql.deltadump.load_table.lambda15 (src/s3ql/deltadump.c:5295)
  File "src/s3ql/deltadump.pyx", line 186, in s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820)
apsw.SQLError: cannot commit - no transaction is active

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/local/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.22', 'console_scripts', 'fsck.s3ql')()
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/fsck.py", line 1257, in main
    db = download_metadata(backend, cachepath + '.db')
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/metadata.py", line 304, in download_metadata
    return restore_metadata(tmpfh, db_file)
  File "/usr/local/lib/python3.4/dist-packages/s3ql-2.22-py3.4-linux-x86_64.egg/s3ql/metadata.py", line 97, in restore_metadata
    load_table(table, columns, db=db, fh=fh)
  File "src/s3ql/deltadump.pyx", line 434, in s3ql.deltadump.load_table (src/s3ql/deltadump.c:7046)
  File "/usr/lib/python3.4/contextlib.py", line 336, in __exit__
    raise exc_details[1]
  File "/usr/lib/python3.4/contextlib.py", line 321, in __exit__
    if cb(*exc_details):
  File "/usr/lib/python3.4/contextlib.py", line 267, in _exit_wrapper
    callback(*args, **kwds)
  File "src/s3ql/deltadump.pyx", line 494, in s3ql.deltadump.load_table.lambda13 (src/s3ql/deltadump.c:5179)
  File "src/s3ql/deltadump.pyx", line 186, in s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:1820)
apsw.SQLError: cannot commit - no transaction is active
There is plenty of storage available, so I don't understand the "database or disk is full" error. Can you help me solve this?
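In case it helps to narrow this down: as I understand it, apsw.FullError corresponds to SQLite's "database or disk is full", i.e. it refers to the local file system holding the metadata database (the --cachedir, where the traceback shows fsck.s3ql writing cachepath + '.db') rather than to the bucket. A minimal sanity check, assuming the cache path from my command above and /tmp as the temporary directory:

# free space and free inodes on the cache directory partition
df -h /myetc/s3ql/cache/
df -i /myetc/s3ql/cache/
# same for the temporary directory (assuming /tmp, or $TMPDIR if set)
df -h /tmp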
Thanks.