On Tuesday, June 20, 2017 at 7:55:58 PM UTC-7, Nikolaus Rath wrote:
>
> On Jun 20 2017, joseph via s3ql <[email protected]>
> wrote:
> > jessie has s3ql 2.11.1, stretch has 2.21.
> >
> > NEWS.Debian.gz seems to indicate I need to upgrade via the intermediate
> > version 2.14, is this correct?
>
> No, this should work out of the box. Debian ships a special patch to
> enable backwards compatibility with jessie.
>
> > root@ns3022725:~# s3qladm upgrade s3://echo-maher-org-uk-backup/
>
> > Getting file system parameters..
> > ERROR: Uncaught top-level exception:
> > Traceback (most recent call last):
> > File "/usr/lib/s3ql/s3ql/common.py", line 564, in thaw_basic_mapping
> > d = literal_eval(buf.decode('utf-8'))
> > UnicodeDecodeError: 'utf-8' codec can't decode byte 0x80 in position 0:
> > invalid start byte
>
> Hmm. That looks like a bug.
>
> Are you confident that the filesystem was unmounted cleanly the last
> time it was mounted? In that case you can try a workaround: temporarily
> move the "s3:=2F=2Fecho-maher-org-uk-backup*" files from ~/.s3ql/
> somewhere else and re-try the upgrade. Does that help?
>
>
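For reference (mostly for anyone who finds this thread later), my understanding is that the workaround amounts to roughly the following before re-running s3qladm upgrade. This is just a sketch: the escaped cache file pattern is copied from your mail and should be double-checked against ls ~/.s3ql first, and the stash directory name is made up.

import glob
import os
import shutil

# Sketch of the suggested workaround: move the cached files for this bucket
# out of ~/.s3ql before re-trying the upgrade, so they can be put back later.
cache_dir = os.path.expanduser('~/.s3ql')
stash_dir = os.path.expanduser('~/s3ql-cache-stash')   # made-up location
os.makedirs(stash_dir, exist_ok=True)

for path in glob.glob(os.path.join(cache_dir, 's3:=2F=2Fecho-maher-org-uk-backup*')):
    print('moving', path)
    shutil.move(path, stash_dir)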
As to whether the filesystem was unmounted cleanly: mount.log seems to think so. I have appended an extract from mount.log to the end of this email; I'm happy to send the whole file, but it is over 100K.
Before I received your response I upgraded a jessie install to 2.14 and ran:
s3qladm upgrade s3://echo-maher-org-uk-backup/
but this failed with what I think was a timeout:
Encountered ConnectionClosed (server closed connection), retrying Backend.lookup (attempt 4)...
..processed 901724/2215676 objects (40.7%, 0 bytes rewritten)..Encountered HTTPError (500 Internal Server Error), retrying Backend.lookup (attempt 5)...
..processed 901762/2215676 objects (40.7%, 0 bytes rewritten)..Encountered ConnectionClosed (server closed connection), retrying Backend.lookup (attempt 6)...
..processed 1036241/2215676 objects (46.8%, 0 bytes rewritten)..Encountered HTTPError (500 Internal Server Error), retrying Backend.lookup (attempt 3)...
Encountered ConnectionClosed (server closed connection), retrying Backend.lookup (attempt 4)...
..processed 1303715/2215676 objects (58.8%, 0 bytes rewritten)..Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/s3qladm", line 9, in <module>
    load_entry_point('s3ql==2.14', 'console_scripts', 's3qladm')()
  File "/usr/lib/s3ql/s3ql/adm.py", line 91, in main
    return upgrade(options)
  File "/usr/lib/s3ql/s3ql/common.py", line 514, in wrapper
    return fn(*a, **kw)
  File "/usr/lib/s3ql/s3ql/adm.py", line 320, in upgrade
    update_obj_metadata(backend, backend_factory, db, options.threads)
  File "/usr/lib/s3ql/s3ql/adm.py", line 397, in update_obj_metadata
    t.join_and_raise()
  File "/usr/lib/s3ql/s3ql/common.py", line 468, in join_and_raise
    raise EmbeddedException(exc_info, self.name)
s3ql.common.EmbeddedException: caused by an exception in thread Thread-5.
Original/inner traceback (most recent call last):
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/common.py", line 447, in run
    self.run_protected()
  File "/usr/lib/s3ql/s3ql/common.py", line 498, in run_protected
    self.target(*self.args, **self.kwargs)
  File "/usr/lib/s3ql/s3ql/adm.py", line 449, in upgrade_loop
    plain_backend.update_meta(obj_id, meta)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 422, in update_meta
    self.copy(key, key, metadata)
  File "/usr/lib/s3ql/s3ql/backends/s3.py", line 94, in copy
    extra_headers=extra_headers)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 107, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 398, in copy
    resp = self._do_request('PUT', '/%s%s' % (self.prefix, dest), headers=headers)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 440, in _do_request
    query_string=query_string, body=body)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 700, in _send_request
    return read_response()
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 662, in read_response
    resp = self.conn.read_response()
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 765, in read_response
    return eval_coroutine(self.co_read_response(), self.timeout)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 1496, in eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 781, in co_read_response
    raise StateError('Previous response not read completely')
dugong.StateError: Previous response not read completely
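(If I read that last error correctly, and I may well not, dugong requires the body of each response to be consumed completely, via read(), readall() or discard(), before the next response on the same connection can be parsed, and the retried copy apparently left a half-read response behind. As I understand dugong's API, normal usage looks roughly like this toy example, which is not s3ql code:)

from dugong import HTTPConnection

conn = HTTPConnection('www.example.com')   # plain HTTP on port 80
conn.send_request('GET', '/')
resp = conn.read_response()
conn.readall()        # drain the body before reading the next response

conn.send_request('GET', '/')
resp = conn.read_response()   # fine: the previous body was read completely
conn.readall()
conn.disconnect()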
Is there any way to recover from this?
> Do you still have access to a jessie system? If so, can you reproduce
> the problem with a freshly created file system?
>
>
I can't reproduce it: I just made a new filesystem on a jessie box and it upgraded just fine on the stretch box.
Joseph
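P.S. One more observation, in case it is useful: byte 0x80 is the opening opcode of a protocol-2 pickle stream, so the parameters that thaw_basic_mapping choked on appear to still be pickled rather than in the literal_eval-able form the new code expects, which would also fit the "Invalid metadata format: pickle" error in the extract below. A quick stand-in demonstration (the dict contents are made up):

import pickle
from ast import literal_eval

params = {'revision': 21, 'seq_no': 42}   # made-up stand-in for the real parameters
buf = pickle.dumps(params, protocol=2)

print(hex(buf[0]))            # 0x80, the protocol-2 pickle marker

try:
    literal_eval(buf.decode('utf-8'))     # the same call as in thaw_basic_mapping
except UnicodeDecodeError as exc:
    print(exc)                # 'utf-8' codec can't decode byte 0x80 in position 0 ...

print(pickle.loads(buf))      # the pickled data itself is intact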
Extract from mount.log on the stretch box (upgraded from jessie on 2017-06-21):
2017-06-03 23:50:30.669 19515:Metadata-Upload-Thread (name)s.dump_metadata: ..contents..
2017-06-03 23:53:15.081 19515:Metadata-Upload-Thread (name)s.dump_metadata: ..ext_attributes..
2017-06-03 23:53:16.953 19515:Metadata-Upload-Thread (name)s.run: Compressing and uploading metadata...
2017-06-04 00:13:44.059 19515:Metadata-Upload-Thread (name)s.run: Wrote 925 MiB of compressed metadata.
2017-06-04 00:13:44.067 19515:Metadata-Upload-Thread (name)s.run: Cycling metadata backups...
2017-06-04 00:13:44.067 19515:Metadata-Upload-Thread (name)s.cycle_metadata: Backing up old metadata...
2017-06-04 02:32:31.516 19515:MainThread (name)s.main: FUSE main loop terminated.
2017-06-04 02:32:31.662 19515:MainThread (name)s.unmount: Unmounting file system...
2017-06-04 02:32:33.487 19515:MainThread (name)s.main: Dumping metadata...
2017-06-04 02:32:33.488 19515:MainThread (name)s.dump_metadata: ..objects..
2017-06-04 02:32:35.140 19515:MainThread (name)s.dump_metadata: ..blocks..
2017-06-04 02:32:49.262 19515:MainThread (name)s.dump_metadata: ..inodes..
2017-06-04 02:36:45.709 19515:MainThread (name)s.dump_metadata: ..inode_blocks..
2017-06-04 02:38:49.755 19515:MainThread (name)s.dump_metadata: ..symlink_targets..
2017-06-04 02:38:54.000 19515:MainThread (name)s.dump_metadata: ..names..
2017-06-04 02:38:56.525 19515:MainThread (name)s.dump_metadata: ..contents..
2017-06-04 02:41:33.006 19515:MainThread (name)s.dump_metadata: ..ext_attributes..
2017-06-04 02:41:33.007 19515:MainThread (name)s.main: Compressing and uploading metadata...
2017-06-04 02:53:48.075 19515:MainThread (name)s.main: Wrote 926 MiB of compressed metadata.
2017-06-04 02:53:48.075 19515:MainThread (name)s.main: Cycling metadata backups...
2017-06-04 02:53:48.075 19515:MainThread (name)s.cycle_metadata: Backing up old metadata...
2017-06-04 02:56:25.040 19515:MainThread (name)s.main: Cleaning up local metadata...
2017-06-04 03:08:45.675 19515:MainThread (name)s.main: All done.
2017-06-21 02:17:11.196 2477:MainThread s3ql.mount.determine_threads: Using 10 upload threads.
2017-06-21 02:17:11.207 2477:MainThread s3ql.mount.main: Autodetected 65474 file descriptors available for cache entries
2017-06-21 02:17:11.981 2477:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/mount.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.21', 'console_scripts', 'mount.s3ql')()
  File "/usr/lib/s3ql/s3ql/mount.py", line 120, in main
    options.authfile, options.compress)
  File "/usr/lib/s3ql/s3ql/common.py", line 340, in get_backend_factory
    backend.fetch('s3ql_passphrase')
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 354, in fetch
    return self.perform_read(do_read, key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 317, in perform_read
    fh = self.open_read(key)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 108, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 337, in open_read
    meta = self._extractmeta(resp, key)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 772, in _extractmeta
    raise CorruptedObjectError('Invalid metadata format: %s' % format_)
s3ql.backends.common.CorruptedObjectError: Invalid metadata format: pickle
2017-06-21 02:19:33.075 2669:MainThread s3ql.mount.determine_threads: Using 10 upload threads.
2017-06-21 02:19:33.075 2669:MainThread s3ql.mount.main: Autodetected 65474 file descriptors available for cache entries
2017-06-21 02:19:33.900 2669:MainThread root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/bin/mount.s3ql", line 11, in <module>
    load_entry_point('s3ql==2.21', 'console_scripts', 'mount.s3ql')()
  File "/usr/lib/s3ql/s3ql/mount.py", line 120, in main
    options.authfile, options.compress)