Hi
On Mon 11-Apr-2016 at 06:08:02AM -0700, Alexandre Gonçalves wrote:
> >
> > It looks like you didn't upgrade the filesystem itself....
The filesystem hasn't been upgraded because an fsck is needed first, but
the fsck can't run because the filesystem hasn't been upgraded. At least
that is my understanding.
> Check the change log file.
You mean this?
2016-03-08, S3QL 2.17
* The internal file system revision has changed. File systems
created with S3QL 2.17 or newer are not compatible with prior S3QL
versions. To update an existing file system to the newest
revision, use the `s3qladm upgrade` command.
In case it helps, these are the errors in the fsck.log from before the
package was upgraded:
2016-03-31 01:13:34.896 10445:MainThread s3ql.backends.common.wrapped:
Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying
Backend.copy (attempt 3)...
2016-03-31 01:13:51.387 10445:MainThread root.excepthook: Uncaught top-level
exception:
Traceback (most recent call last):
File "/usr/bin/fsck.s3ql", line 9, in <module>
load_entry_point('s3ql==2.15', 'console_scripts', 'fsck.s3ql')()
File "/usr/lib/s3ql/s3ql/fsck.py", line 1285, in main
dump_and_upload_metadata(backend, db, param)
File "/usr/lib/s3ql/s3ql/metadata.py", line 312, in dump_and_upload_metadata
upload_metadata(backend, fh, param)
File "/usr/lib/s3ql/s3ql/metadata.py", line 326, in upload_metadata
cycle_metadata(backend)
File "/usr/lib/s3ql/s3ql/metadata.py", line 125, in cycle_metadata
cycle_fn("s3ql_metadata_bak_%d" % i, "s3ql_metadata_bak_%d" % (i + 1))
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 312, in copy
self._copy_or_rename(src, dest, rename=False, metadata=metadata)
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 346, in _copy_or_rename
self.backend.copy(src, dest, metadata=meta_raw)
File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
return method(*a, **kw)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 398, in copy
resp = self._do_request('PUT', '/%s%s' % (self.prefix, dest),
headers=headers)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 510, in _do_request
self._parse_error_response(resp)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 544, in _parse_error_response
raise get_S3Error(tree.findtext('Code'), tree.findtext('Message'),
resp.headers)
s3ql.backends.s3c.S3Error: ServiceUnavailable: Please reduce your request
rate.
2016-03-31 02:18:14.521 16762:MainThread root.excepthook: Uncaught top-level
exception:
Traceback (most recent call last):
File "/usr/bin/fsck.s3ql", line 9, in <module>
load_entry_point('s3ql==2.15', 'console_scripts', 'fsck.s3ql')()
File "/usr/lib/s3ql/s3ql/fsck.py", line 1285, in main
dump_and_upload_metadata(backend, db, param)
File "/usr/lib/s3ql/s3ql/metadata.py", line 312, in dump_and_upload_metadata
upload_metadata(backend, fh, param)
File "/usr/lib/s3ql/s3ql/metadata.py", line 321, in upload_metadata
metadata=param, is_compressed=True)
File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
return method(*a, **kw)
File "/usr/lib/s3ql/s3ql/backends/common.py", line 337, in perform_write
return fn(fh)
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 716, in __exit__
self.close()
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 710, in close
self.fh.close()
File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
return method(*a, **kw)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 910, in close
headers=self.headers, body=self.fh)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 510, in _do_request
self._parse_error_response(resp)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 544, in _parse_error_response
raise get_S3Error(tree.findtext('Code'), tree.findtext('Message'),
resp.headers)
s3ql.backends.s3c.S3Error: ServiceUnavailable: Please reduce your request
rate.
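For reference, my understanding is that "Please reduce your request rate" is
the kind of transient 503 that the retry wrapper is meant to absorb (the log
does show "retrying Backend.copy (attempt 3)" before giving up). This is not
S3QL's actual code, just a sketch of the exponential-backoff-with-jitter
pattern I'd expect; the TransientError class and all parameter names here are
made up for illustration:

```python
import random
import time

class TransientError(Exception):
    """Stand-in for transient failures like S3Error: ServiceUnavailable."""

def retry_with_backoff(fn, max_attempts=8, base=0.5, cap=60.0):
    # Retry fn(), sleeping roughly base * 2**attempt seconds between
    # attempts (capped at `cap`, with jitter so concurrent clients
    # don't retry in lockstep). Re-raises after the last attempt.
    for attempt in range(max_attempts):
        try:
            return fn()
        except TransientError:
            if attempt == max_attempts - 1:
                raise
            delay = min(cap, base * 2 ** attempt)
            time.sleep(delay * random.uniform(0.5, 1.0))
```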
And then after the upgrade there is:
2016-03-31 10:00:47.765 10054:MainThread root.excepthook: File system revision
too old, please run `s3qladm upgrade` first.
In the mount.log there are these errors, from some days earlier:
2016-03-03 00:44:10.766 18589:Thread-3 root.excepthook: Uncaught top-level
exception:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/mount.py", line 64, in run_with_except_hook
run_old(*args, **kw)
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "/usr/lib/s3ql/s3ql/block_cache.py", line 404, in _upload_loop
self._do_upload(*tmp)
File "/usr/lib/s3ql/s3ql/block_cache.py", line 431, in _do_upload
% obj_id).get_obj_size()
File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
return method(*a, **kw)
File "/usr/lib/s3ql/s3ql/backends/common.py", line 337, in perform_write
return fn(fh)
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 551, in __exit__
self.close()
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 545, in close
self.fh.close()
File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 710, in close
self.fh.close()
File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
return method(*a, **kw)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 910, in close
headers=self.headers, body=self.fh)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 510, in _do_request
self._parse_error_response(resp)
File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 544, in _parse_error_response
raise get_S3Error(tree.findtext('Code'), tree.findtext('Message'),
resp.headers)
s3ql.backends.s3c.S3Error: ServiceUnavailable: Please reduce your request
rate.
2016-03-03 00:44:12.020 18589:CommitThread s3ql.mount.exchook: Unhandled
top-level exception during shutdown (will not be re-raised)
2016-03-03 00:44:12.021 18589:CommitThread root.excepthook: Uncaught
top-level exception:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/database.py", line 143, in get_row
row = next(res)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/block_cache.py", line 535, in upload
block_id = self.db.get_val('SELECT id FROM blocks WHERE hash=?', (hash_,))
File "/usr/lib/s3ql/s3ql/database.py", line 127, in get_val
return self.get_row(*a, **kw)[0]
File "/usr/lib/s3ql/s3ql/database.py", line 145, in get_row
raise NoSuchRowError()
s3ql.database.NoSuchRowError: Query produced 0 result rows
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/mount.py", line 64, in run_with_except_hook
run_old(*args, **kw)
File "/usr/lib/s3ql/s3ql/mount.py", line 754, in run
self.block_cache.upload(el)
File "/usr/lib/s3ql/s3ql/block_cache.py", line 554, in upload
self._queue_upload((el, obj_id))
File "/usr/lib/s3ql/s3ql/block_cache.py", line 597, in _queue_upload
raise NoWorkerThreads('no upload threads')
s3ql.block_cache.NoWorkerThreads: no upload threads
2016-03-03 00:44:17.563 18589:MainThread s3ql.block_cache.destroy: Unable to
flush cache, no upload threads left alive
2016-03-03 00:44:22.572 18589:MainThread s3ql.mount.unmount: Unmounting file
system...
2016-03-03 00:44:22.786 18589:MainThread root.excepthook: Uncaught top-level
exception:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/database.py", line 143, in get_row
row = next(res)
StopIteration
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/block_cache.py", line 535, in upload
block_id = self.db.get_val('SELECT id FROM blocks WHERE hash=?', (hash_,))
File "/usr/lib/s3ql/s3ql/database.py", line 127, in get_val
return self.get_row(*a, **kw)[0]
File "/usr/lib/s3ql/s3ql/database.py", line 145, in get_row
raise NoSuchRowError()
s3ql.database.NoSuchRowError: Query produced 0 result rows
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/lib/s3ql/s3ql/mount.py", line 214, in main
llfuse.main(options.single)
File "src/llfuse/fuse_api.pxi", line 319, in llfuse.capi.main (src/llfuse/capi_linux.c:26545)
File "src/llfuse/handlers.pxi", line 560, in llfuse.capi.fuse_setxattr (src/llfuse/capi_linux.c:16299)
File "src/llfuse/handlers.pxi", line 573, in llfuse.capi.fuse_setxattr (src/llfuse/capi_linux.c:16251)
File "/usr/lib/s3ql/s3ql/fs.py", line 243, in setxattr
self.cache.clear()
File "/usr/lib/s3ql/s3ql/block_cache.py", line 928, in clear
self.expire() # Releases global lock
File "/usr/lib/s3ql/s3ql/block_cache.py", line 824, in expire
self.upload(el) # Releases global lock
File "/usr/lib/s3ql/s3ql/block_cache.py", line 554, in upload
self._queue_upload((el, obj_id))
File "/usr/lib/s3ql/s3ql/block_cache.py", line 597, in _queue_upload
raise NoWorkerThreads('no upload threads')
s3ql.block_cache.NoWorkerThreads: no upload threads
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "/usr/bin/mount.s3ql", line 9, in <module>
load_entry_point('s3ql==2.15', 'console_scripts', 'mount.s3ql')()
File "/usr/lib/s3ql/s3ql/mount.py", line 229, in main
unmount_clean = True
File "/usr/lib/python3.5/contextlib.py", line 357, in __exit__
raise exc_details[1]
File "/usr/lib/python3.5/contextlib.py", line 342, in __exit__
if cb(*exc_details):
File "/usr/lib/python3.5/contextlib.py", line 288, in _exit_wrapper
callback(*args, **kwds)
File "/usr/lib/s3ql/s3ql/block_cache.py", line 390, in destroy
os.rmdir(self.path)
OSError: [Errno 39] Directory not empty:
'/root/.s3ql/s3c:=2F=2Fs.qstack.advania.com:443=2Fcrin1=2F-cache'
And then just before the upgrade:
2016-03-31 03:09:43.096 27176:MainThread root.excepthook: Uncaught top-level
exception:
Traceback (most recent call last):
File "/usr/bin/mount.s3ql", line 9, in <module>
load_entry_point('s3ql==2.15', 'console_scripts', 'mount.s3ql')()
File "/usr/lib/s3ql/s3ql/mount.py", line 214, in main
llfuse.main(options.single)
File "src/fuse_api.pxi", line 304, in llfuse.main (src/llfuse.c:34597)
ValueError: No workers is not a good idea
All the best
Chris
--
Webarchitects Co-operative
http://webarchitects.coop/
+44 114 276 9709
@webarchcoop