I ran into this problem on one particular bucket and cannot recover the s3ql filesystem. The system had been running fine with s3ql-1.17 for some time, but is now stuck. I'm hoping you can tell me how to use the backup metadata to get it going again.
(attached fsck-debug.log)
# fsck.s3ql --debug all --authfile /tmp/s3qlauth2.tmp gs://cust-backup01
Starting fsck of gs://las-backup01
Backend reports that file system is still mounted elsewhere. Either the
file system has not been unmounted cleanly or the data has not yet
propagated through the backend. In the later case, waiting for a while
should fix the problem, in the former case you should try to run fsck
on the computer where the file system has been mounted most recently.
Enter "continue" to use the outdated data anyway:
Downloading and decompressing metadata...
Reading metadata...
..objects..
Exception during cleanup:
Traceback (most recent call last):
  File "/usr/lib64/python2.7/site-packages/s3ql-1.17-py2.7-linux-x86_64.egg/s3ql/cleanup_manager.py", line 33, in _next_callback
    callback(*args, **kwargs)
  File "deltadump.pyx", line 455, in s3ql.deltadump.load_table.lambda12 (src/s3ql/deltadump.c:4473)
  File "deltadump.pyx", line 220, in s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:2166)
ConstraintError: PRIMARY KEY must be unique

Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 9, in <module>
    load_entry_point('s3ql==1.17', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib64/python2.7/site-packages/s3ql-1.17-py2.7-linux-x86_64.egg/s3ql/fsck.py", line 1177, in main
    db = restore_metadata(tmpfh, cachepath + '.db')
  File "/usr/lib64/python2.7/site-packages/s3ql-1.17-py2.7-linux-x86_64.egg/s3ql/metadata.py", line 95, in restore_metadata
    load_table(table, columns, db=db, fh=fh)
  File "deltadump.pyx", line 411, in s3ql.deltadump.load_table (src/s3ql/deltadump.c:6001)
  File "deltadump.pyx", line 518, in s3ql.deltadump.load_table (src/s3ql/deltadump.c:5884)
  File "deltadump.pyx", line 220, in s3ql.deltadump.SQLITE_CHECK_RC (src/s3ql/deltadump.c:2166)
ConstraintError: PRIMARY KEY must be unique
I saw Debian bug report #771452 for this same error, and it offered two
possible resolutions:
- invoke fsck without SSL
- apply the patch to s3c.py
Neither of these works in this case, so it may be that something in the
metadata itself is corrupt.
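For what it's worth, my reading of the traceback is that restore_metadata()
replays rows from the downloaded dump into a fresh SQLite database, so a
repeated id in one of the tables (it dies while loading "objects") would
produce exactly this error. A minimal sketch of that failure mode, using the
sqlite3 command line rather than the real s3ql code, with a made-up
simplified schema:

# Throwaway database with a simplified, hypothetical "objects" table
# (NOT the real s3ql schema); insert the same primary key twice.
sqlite3 /tmp/dupe-test.db 'CREATE TABLE objects (id INTEGER PRIMARY KEY, refcount INT, size INT)'
sqlite3 /tmp/dupe-test.db 'INSERT INTO objects VALUES (1, 1, 4096)'
sqlite3 /tmp/dupe-test.db 'INSERT INTO objects VALUES (1, 1, 4096)'
# The second insert fails with "Error: PRIMARY KEY must be unique" on the
# SQLite shipped with CentOS 6 (newer SQLite says "UNIQUE constraint failed").

If that understanding is right, then I assume either the s3ql_metadata dump
itself contains duplicate rows, or the download path is handing some data
back twice (which is what the Debian report seemed to be about).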
The difficult part is that this bucket was formatted with s3ql-1.17 on
CentOS 6, and backporting s3ql 2.x to CentOS 6 and then migrating the
filesystem to 2.x would be a big task.
I did notice that s3qladm download-metadata fails (with the same
primary-key error) for no. 0 (s3ql_metadata), but succeeds for no. 1
(s3ql_metadata_bak_0); see the listing below. I'm thinking that perhaps I
can put s3ql_metadata_bak_0 in place of the current s3ql_metadata, but if
so, I need to know the procedure for that (a rough sketch of what I have
in mind follows the listing).
# s3qladm --authfile /tmp/s3qlauth2.tmp download-metadata gs://cust-backup01
The following backups are available:
 No  Name                  Date
  0  s3ql_metadata         2016-07-06 12:58:38
  1  s3ql_metadata_bak_0   2016-07-06 12:41:59
  2  s3ql_metadata_bak_1   2016-07-06 08:52:28
  3  s3ql_metadata_bak_10  2016-07-02 09:16:45
  4  s3ql_metadata_bak_2   2016-07-06 08:51:56
  5  s3ql_metadata_bak_3   2016-07-05 07:12:08
  6  s3ql_metadata_bak_4   2016-07-05 07:11:54
  7  s3ql_metadata_bak_5   2016-07-04 06:50:46
  8  s3ql_metadata_bak_6   2016-07-04 06:50:36
  9  s3ql_metadata_bak_7   2016-07-03 07:14:33
 10  s3ql_metadata_bak_8   2016-07-03 07:14:23
 11  s3ql_metadata_bak_9   2016-07-02 09:16:55
Enter no to download: 1
Downloading and decompressing s3ql_metadata_bak_0...
Reading metadata...
..objects..
..blocks..
..inodes..
..inode_blocks..
..symlink_targets..
..names..
..contents..
..ext_attributes..
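To make the question concrete, below is the sort of procedure I have in
mind. It is entirely untested and assumes that the metadata objects are
plain keys at the top of the bucket (no storage-URL prefix) and that a
server-side copy preserves the object metadata; the gsutil commands are
just my guess at how to do the swap, not something taken from the s3ql
documentation:

# Keep a raw copy of the current (apparently corrupt) metadata object
# locally, just in case.
gsutil cp gs://cust-backup01/s3ql_metadata /root/s3ql_metadata.corrupt
# Promote the newest good backup to be the live metadata object.
gsutil cp gs://cust-backup01/s3ql_metadata_bak_0 gs://cust-backup01/s3ql_metadata
# Re-run fsck against the replaced metadata.
fsck.s3ql --debug all --authfile /tmp/s3qlauth2.tmp gs://cust-backup01

Is that roughly right, or does fsck also care about the s3ql_seq_no_*
objects or the cached metadata under ~/.s3ql on the client? If there is a
supported way to do this with the s3ql tools themselves, I would much
rather use that.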
Andy Cress