Guys,

S3QL version 2 is still not proving stable for me. Early in the week it
crashed with an empty mount.log, so I've moved logging to syslog, but
I've just received another failure backtrace. Here are the log details
leading up to the error:
Sep 22 13:31:31 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.mount.run: Dumping metadata...
Sep 22 13:31:31 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..objects..
Sep 22 13:31:31 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..blocks..
Sep 22 13:31:32 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..inodes..
Sep 22 13:32:27 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..inode_blocks..
Sep 22 13:33:04 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..symlink_targets..
Sep 22 13:33:04 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..names..
Sep 22 13:33:05 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..contents..
Sep 22 13:33:39 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.metadata.dump_metadata: ..ext_attributes..
Sep 22 13:33:40 justin mount.s3ql[190630]: mount.s3ql[190630:Metadata-Upload-Thread] s3ql.mount.run: Compressing and uploading metadata...
Sep 22 13:33:42 justin mount.s3ql[190630]: mount.s3ql[190630:Thread-3] root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
  File "/usr/lib/s3ql/s3ql/mount.py", line 66, in run_with_except_hook
    run_old(*args, **kw)
  File "/usr/lib/python3.4/threading.py", line 868, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 404, in _upload_loop
    self._do_upload(*tmp)
  File "/usr/lib/s3ql/s3ql/block_cache.py", line 431, in _do_upload
    % obj_id).get_obj_size()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 46, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 258, in perform_write
    return fn(fh)
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 477, in __exit__
    self.close()
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 471, in close
    self.fh.close()
  File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 636, in close
    self.fh.close()
  File "/usr/lib/s3ql/s3ql/backends/common.py", line 46, in wrapped
    return method(*a, **kw)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 845, in close
    headers=self.headers, body=self.fh)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 409, in _do_request
    query_string=query_string, body=body)
  File "/usr/lib/s3ql/s3ql/backends/s3c.py", line 642, in _send_request
    headers=headers, body=BodyFollowing(body_len))
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 477, in send_request
    self.timeout)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 1361, in eval_coroutine
    if not next(crt).poll(timeout=timeout):
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 568, in co_send_request
    yield from self._co_send(buf)
  File "/usr/lib/python3/dist-packages/dugong/__init__.py", line 584, in _co_send
    len_ = self._sock.send(buf)
  File "/usr/lib/python3.4/ssl.py", line 678, in send
    v = self._sslobj.write(data)
ssl.SSLError: [SSL: BAD_WRITE_RETRY] bad write retry (_ssl.c:1638)
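For what it's worth, my understanding is that [SSL: BAD_WRITE_RETRY] is
OpenSSL complaining that a write which previously failed with
SSLWantWriteError was retried with a different buffer (or offset) than the
original attempt. A minimal sketch of a send loop that respects that rule
(ssl_send_all is my own illustrative name, not anything from s3ql or
dugong):

```python
import ssl

def ssl_send_all(ssl_sock, data):
    """Send all of `data`, retrying with the *same* buffer on SSLWantWriteError.

    OpenSSL requires that a TLS write interrupted by SSLWantWriteError be
    retried with an identical buffer; retrying with a different object or
    offset is what raises [SSL: BAD_WRITE_RETRY].
    """
    view = memoryview(data)
    while view:
        try:
            # On a non-blocking socket this may raise SSLWantWriteError.
            sent = ssl_sock.send(view)
        except ssl.SSLWantWriteError:
            continue  # retry with the SAME view, not a fresh slice
        view = view[sent:]  # advance only after a successful (partial) write
```

If some layer in the stack re-slices or swaps the buffer between the failed
attempt and the retry, you get exactly this error.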
Is this likely to be issue 87?
On Wednesday, September 14, 2016 at 10:44:57 AM UTC+1, Roger Gammans wrote:
>
>
>
> On 9 September 2016 at 17:49, Nikolaus Rath <[email protected]> wrote:
>
>> >>
>> >> Required metadata grows linearly with stored data. The proportionality
>> >> factor depends on how big the stored files are, and what block size you
>> >> chose.
>> >>
>> >
>> > Is that linear with size before or after de-duplication? Given we have
>> > multiple backup snapshots created with s3qlcp, it makes a big
>> > difference.
>>
>> Both. De-duplicated data takes a little less metadata than unique data,
>> but still scales linearly.
>>
>
> Thanks, that is helpful. I think I've got a handle on what is happening
> now. It looks like the cache LV is a little smaller than necessary and a
> hotcopy snapshot process consumed the rest.
>
>
>
>
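To put the linear-scaling point from the quoted thread in concrete terms,
here is a rough back-of-envelope estimate. The function and the per-entry
byte figure are purely illustrative assumptions of mine, not S3QL
constants:

```python
import math

def estimate_metadata_entries(data_bytes, avg_file_bytes, block_bytes):
    """Rough count of block-table entries for an S3QL-like file system.

    Metadata grows with the number of blocks, which depends on total data
    stored, average file size, and the chosen block size.
    """
    n_files = max(1, data_bytes // avg_file_bytes)
    blocks_per_file = math.ceil(avg_file_bytes / block_bytes)
    return n_files * blocks_per_file

# Assumed figure for illustration only -- not an S3QL constant.
BYTES_PER_ENTRY = 300

entries = estimate_metadata_entries(
    data_bytes=1 * 2**40,       # 1 TiB of stored data
    avg_file_bytes=8 * 2**20,   # 8 MiB average file
    block_bytes=10 * 2**20,     # assumed 10 MiB block size
)
approx_metadata_bytes = entries * BYTES_PER_ENTRY
```

Under those assumptions smaller block sizes (or many small files) multiply
the entry count, which matches the observation that the cache LV needs to
scale with the data, not stay fixed.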
--
You received this message because you are subscribed to the Google Groups
"s3ql" group.