I'm running S3QL 3.0 (Debian) on a locally hosted filesystem (the local:// backend), so no S3 is involved this time.
Unfortunately, over the weekend the host disk filled up. I've since extended the disk, but I still can't get my S3QL filesystem to recover.
The original error was triggered through the mounted filesystem; here's the log:
2019-06-15 13:26:28.724 2293:Metadata-Upload-Thread s3ql.mount.run: Dumping metadata...
2019-06-15 13:26:29.542 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..objects..
2019-06-15 13:27:09.527 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..blocks..
2019-06-15 13:31:16.480 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..inodes..
2019-06-15 13:34:34.279 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..inode_blocks..
2019-06-15 13:36:28.091 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..symlink_targets..
2019-06-15 13:36:28.111 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..names..
2019-06-15 13:37:50.069 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..contents..
2019-06-15 13:40:28.757 2293:Metadata-Upload-Thread s3ql.metadata.dump_metadata: ..ext_attributes..
2019-06-15 13:43:58.038 2293:Metadata-Upload-Thread s3ql.metadata.upload_metadata: Compressing and uploading metadata...
2019-06-15 13:46:04.242 2293:Metadata-Upload-Thread s3ql.metadata.upload_metadata: Wrote 73.5 MiB of compressed metadata.
2019-06-15 13:46:04.244 2293:Metadata-Upload-Thread s3ql.metadata.upload_metadata: Cycling metadata backups...
2019-06-15 13:46:04.245 2293:Metadata-Upload-Thread s3ql.metadata.cycle_metadata: Backing up old metadata...
2019-06-16 06:12:54.030 2293:Thread-9 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)
2019-06-16 06:12:53.817 2293:Thread-5 root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 279, in perform_write
    return fn(fh)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 457, in do_write
    fh.write(buf)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 370, in write
    self.fh.write(buf)
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 323, in write
    self.fh.write(buf)
OSError: [Errno 28] No space left on device

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/mount.py", line 58, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 445, in _upload_loop
    self._do_upload(*tmp)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 472, in _do_upload
    % obj_id).get_obj_size()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 279, in perform_write
    return fn(fh)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 389, in __exit__
    self.close()
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 380, in close
    self.fh.write(buf)
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 323, in write
    self.fh.write(buf)
OSError: [Errno 28] No space left on device
2019-06-16 06:12:54.179 2293:Thread-7 s3ql.mount.exchook: Unhandled top-level exception during shutdown (will not be re-raised)
2019-06-16 06:12:54.329 2293:Thread-9 root.excepthook: Uncaught top-level exception:
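Both tracebacks bottom out in errno 28, which is plain ENOSPC, i.e. the backend's write() genuinely ran out of space on the underlying device. A quick sanity check (nothing S3QL-specific here):

```python
import errno
import os

# Errno 28 on Linux is ENOSPC ("No space left on device"),
# the error S3QL's local backend is propagating above.
print(errno.ENOSPC)
print(os.strerror(errno.ENOSPC))
```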
I have an automatic fsck that is triggered whenever an S3QL mount fails (typically on "transport not connected" errors), and it fired even though this is a local disk:
2019-06-17 09:37:12.780 24686:MainThread s3ql.fsck.main: Starting fsck of local:///var/autofs/misc/s3ql/field/
2019-06-17 09:37:12.833 24686:MainThread s3ql.fsck.main: Using cached metadata.
2019-06-17 09:37:12.834 24686:MainThread s3ql.fsck.main: Remote metadata is outdated.
2019-06-17 09:37:12.834 24686:MainThread s3ql.fsck.main: Checking DB integrity...
2019-06-17 10:12:35.967 24686:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/fsck.s3ql", line 11, in <module>
    load_entry_point('s3ql==3.0', 'console_scripts', 'fsck.s3ql')()
  File "/usr/lib/s3ql/s3ql/fsck.py", line 1269, in main
    backend['s3ql_seq_no_%d' % param['seq_no']] = b'Empty'
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 197, in __setitem__
    self.store(key, value)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 307, in store
    self.perform_write(lambda fh: fh.write(val), key, metadata)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 278, in perform_write
    with self.open_write(key, metadata, is_compressed) as fh:
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 274, in open_write
    fh = self.backend.open_write(key, meta_raw)
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 107, in open_write
    dest.write(b's3ql_1\n')
  File "/usr/lib/s3ql/s3ql/backends/local.py", line 323, in write
    self.fh.write(buf)
OSError: [Errno 28] No space left on device
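For context, the automatic fsck is nothing clever, just a cron-style retry along these lines (the mountpoint, schedule, and user here are illustrative, not my exact setup; the storage URL is the one in the log above):

```shell
# Illustrative crontab fragment: if the S3QL mountpoint has dropped out,
# re-run fsck.s3ql in batch mode against the local backend.
*/10 * * * * root mountpoint -q /srv/s3ql || fsck.s3ql --batch local:///var/autofs/misc/s3ql/field/
```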
I've now extended the underlying disk, but I'm being told that the S3QL filesystem is still mounted elsewhere. It isn't: this is the only system with access to the host storage, because it's a local disk. If I run fsck manually, I'm told the locally cached metadata is out of date and asked to confirm that I want to wipe it. I really don't want to do that, because there is still over 600 MB in the local cache waiting to be uploaded to the S3QL filesystem. The cache is on a different disk from the S3QL storage, so I'm really not sure why the local cache is considered out of date with respect to the remote copy.
Help please!
Thanks,
Chris