Hi
I have run up against another problem with one of the buckets I created at
Advania -- fsck.s3ql doesn't complete without an error:
fsck.s3ql --backend-options="dumb-copy" s3c://s.qstack.advania.com:443/crin2
Starting fsck of s3c://s.qstack.advania.com:443/crin2/
Using cached metadata.
Remote metadata is outdated.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 19000 objects so far..object 6236 only exists in table but not in
backend, deleting
File may lack data, moved to /lost+found:
b'/lost+found/lost+found_lost+found__2015-07-23________14:31:29____var____www____prod____docroot____owncloud________old____3rdparty____symfony____routing____Symfony____Component____Routing____Router.php'
Dropping temporary indices...
Uncaught top-level exception:
Traceback (most recent call last):
File "/usr/bin/fsck.s3ql", line 9, in <module>
load_entry_point('s3ql==2.13', 'console_scripts', 'fsck.s3ql')()
File "/usr/lib/s3ql/s3ql/fsck.py", line 1272, in main
fsck.check()
File "/usr/lib/s3ql/s3ql/fsck.py", line 90, in check
self.check_objects_id()
File "/usr/lib/s3ql/s3ql/fsck.py", line 951, in check_objects_id
(_, newname) = self.resolve_free(b"/lost+found", escape(path))
File "/usr/lib/s3ql/s3ql/fsck.py", line 1004, in resolve_free
name += b'-'
TypeError: Can't convert 'bytes' object to str implicitly
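For what it's worth, that first TypeError looks like a classic Python 3
str/bytes mix-up: resolve_free() ends up appending a bytes suffix to a name
that arrived as str. A minimal sketch of the failure mode (the variable name
is mine, not the real fsck.py internals):

```python
# Sketch of the str/bytes mix-up behind the first TypeError
# (hypothetical names; the real code is in fsck.py's resolve_free()).
name = "Router.php"        # name arrived as a str
try:
    name += b'-'           # str += bytes is illegal on Python 3
except TypeError as exc:
    print("TypeError:", exc)

# Once everything is bytes, the suffixing works as intended:
name = b"Router.php"
name += b'-'
print(name)
```

which is presumably what the patch referenced below straightens out, by
keeping the name as bytes throughout.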
I found this report of the same issue:
- https://bitbucket.org/nikratio/s3ql/issues/137/s3qlfsck-fails-with-typeerror-cant-convert
I applied the patch from here:
- https://bitbucket.org/nikratio/s3ql/commits/809d457684c8
And I now have the same issue as in this comment:
- https://bitbucket.org/nikratio/s3ql/issues/137/s3qlfsck-fails-with-typeerror-cant-convert#comment-18402987
fsck.s3ql --backend-options="dumb-copy" s3c://s.qstack.advania.com:443/crin2
Starting fsck of s3c://s.qstack.advania.com:443/crin2/
Using cached metadata.
Remote metadata is outdated.
Checking DB integrity...
Creating temporary extra indices...
Checking lost+found...
Checking cached objects...
Checking names (refcounts)...
Checking contents (names)...
Checking contents (inodes)...
Checking contents (parent inodes)...
Checking objects (reference counts)...
Checking objects (backend)...
..processed 18000 objects so far..object 6236 only exists in table but not in
backend, deleting
File may lack data, moved to /lost+found:
b'/lost+found/lost+found_lost+found__2015-07-23________14:31:29____var____www____prod____docroot____owncloud________old____3rdparty____symfony____routing____Symfony____Component____Routing____Router.php'
Dropping temporary indices...
Uncaught top-level exception:
Traceback (most recent call last):
File "/usr/bin/fsck.s3ql", line 9, in <module>
load_entry_point('s3ql==2.13', 'console_scripts', 'fsck.s3ql')()
File "/usr/lib/s3ql/s3ql/fsck.py", line 1274, in main
fsck.check()
File "/usr/lib/s3ql/s3ql/fsck.py", line 90, in check
self.check_objects_id()
File "/usr/lib/s3ql/s3ql/fsck.py", line 951, in check_objects_id
(_, newname) = self.resolve_free(b"/lost+found", escape(path))
File "/usr/lib/s3ql/s3ql/fsck.py", line 1002, in resolve_free
name = b'%s ... %s' % (name[0:120], name[-120:])
TypeError: unsupported operand type(s) for %: 'bytes' and 'tuple'
This is with Debian stretch s3ql 2.13+dfsg-1.
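In case it helps with triage: the second traceback looks like the
%-formatting on bytes that PEP 461 only added in Python 3.5, so an older
interpreter raises exactly that "unsupported operand type(s) for %" error.
A sketch (my own, not the upstream fix) of a version-independent spelling
using plain concatenation:

```python
# b'%s ... %s' % (...) needs PEP 461 (Python >= 3.5); on older
# interpreters it raises the TypeError shown above. Plain bytes
# concatenation behaves the same on every Python 3 version:
name = b"x" * 300
newname = name[0:120] + b' ... ' + name[-120:]
print(len(newname))  # 245 = 120 + 5 + 120
```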
I'm tempted to simply delete the bucket and try again... does anyone
have any better suggestions?
All the best
Chris
--
Webarchitects Co-operative
http://webarchitects.coop/
+44 114 276 9709
@webarchcoop
--
You received this message because you are subscribed to the Google Groups
"s3ql" group.