Hi all,
I'm trying to use s3ql for a multi-terabyte backup which has a lot of
duplication at the block level. But I'm finding that since I changed to
a machine with Debian Jessie installed, I keep getting into a situation
where I see the "Transport endpoint is not connected" error message,
together with:

local variable 'obj_id' referenced before assignment

That suggests something is running out of disk space, and I assume
(because I was already aware of the limit on S3 accounts) that it is a
local filesystem. Any idea which one? My guess would be the metadata
cache, but that seems fine at th[...]
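As far as I can tell, that second error has the classic shape of a
masked failure: an earlier exception (say, the cache disk filling up)
aborts a function before a variable is assigned, and the error handler
then trips over the missing name. A schematic illustration in Python,
with made-up names rather than s3ql's actual code:

    import logging
    logging.basicConfig()
    log = logging.getLogger('sketch')

    def allocate_object():
        # stand-in for whatever fails first, e.g. ENOSPC on the cache disk
        raise OSError(28, 'No space left on device')

    def upload_block(data):
        try:
            obj_id = allocate_object()   # raises before obj_id is bound
        except OSError:
            # this line itself now raises "local variable 'obj_id'
            # referenced before assignment", hiding the original ENOSPC
            log.warning('upload of object %s failed', obj_id)

    upload_block(b'...')

So if that is what is happening here, the interesting error is whichever
one was raised first, not the UnboundLocalError itself.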
On 7 September 2016 at 22:37, Nikolaus Rath wrote:
> On Sep 07 2016, Roger Gammans wrote:
> > apsw.CorruptError: CorruptError: database disk image is malformed
>
> That means you'll have to discard the locally cached metadata. The next
> fsck.s3ql will recover whatever w[...]
On 9 September 2016 at 17:49, Nikolaus Rath wrote:
> >>
> >> Required metadata grows linearly with stored data. The proportionality
> >> factor depends on how big the stored files are, and what block size you
> >> chose.
> >>
> >
> > Is that linear with size before or after de-duplication? Given we [...]
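To put rough numbers on the linear growth (these figures are mine,
purely for illustration; the per-block overhead is an assumption, not
something from the s3ql documentation):

    # back-of-the-envelope: 2 TiB of data, 10 MiB blocks, and an assumed
    # ~300 bytes of metadata per stored block
    data_bytes = 2 * 1024**4
    block_size = 10 * 1024**2
    per_block  = 300                       # assumed overhead, in bytes

    n_blocks = data_bytes // block_size
    print(n_blocks)                        # 209715 blocks
    print(n_blocks * per_block / 1024**2)  # ~60 MiB of metadata

Halving the block size would double the block count, and with it the
metadata, which is presumably the proportionality factor mentioned above.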
  [...] in co_send
    len_ = self._sock.send(buf)
  File "/usr/lib/python3.4/ssl.py", line 678, in send
    v = self._sslobj.write(data)
Thanks for all your work so far,
I've upgraded to 2.21 and started to try again, but I got a crash after
24 hours:
mount.s3ql[181225:Thread-6] root.excepthook: Uncaught top-level exception:
Traceback (most recent call last):
[...]
Thanks.
On Monday, November 28, 2016 at 11:44:37 PM UTC, Nikolaus Rath wrote:
>
> On Nov 28 2016, Roger Gammans wrote:
> > Thanks for all your work so far,
> >
> > I've upgraded to 2.21 and started to tr[...]
Hi,
Is it possible to get s3qladm to create its temporary files somewhere
other than /tmp? There is 20G free in /tmp, but that isn't enough space
to run 's3qladm upgrade', and I don't really want to make /tmp any
bigger on this server.
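I did wonder whether setting TMPDIR would be enough: if s3qladm creates
its temporary files through Python's standard tempfile module (an
assumption on my part, though s3ql is Python), it should honour TMPDIR,
e.g. "TMPDIR=/var/tmp s3qladm upgrade <storage-url>". A quick way to
check which directory gets picked up:

    import os, tempfile

    # tempfile consults $TMPDIR (then $TEMP and $TMP) before falling
    # back to /tmp; the directory must already exist and be writable
    os.environ['TMPDIR'] = '/var/tmp'
    tempfile.tempdir = None          # drop the cached default
    print(tempfile.gettempdir())     # -> /var/tmp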