On Feb 27 2016, Chris <[email protected]> wrote:
> With the new release of s3ql 2.16, I am trying to run fsck.s3ql. Every time,
> without fail, when it gets to the end and tries to upload the metadata, it
> times out like this:
>
> Compressing and uploading metadata...
> Wrote 15.1 MiB of compressed metadata.
> Cycling metadata backups...
> Backing up old metadata...
> Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying Backend._copy_helper (attempt 3)...
> Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying Backend._copy_helper (attempt 4)...
> Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying Backend._copy_helper (attempt 5)...
> Encountered ConnectionTimedOut (send/recv timeout exceeded), retrying Backend._copy_helper (attempt 6)...
>
>
> It keeps retrying like this indefinitely and never succeeds.
>
> However, if I drop back to s3ql 2.15, the fsck succeeds.
>
> Any ideas?
Are you sure? I don't think any of the changes from 2.15 to 2.16 touch
the code involved in this. Also, when dropping back to 2.15, did you
also downgrade python-llfuse accordingly?
Does increasing the timeout (backend option) help?
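For example (assuming the option in question is the tcp-timeout backend
option, which controls the send/recv timeout), something along the lines of

    fsck.s3ql --backend-options tcp-timeout=60 <storage url>

should raise the timeout well above the default and may give the metadata
copy enough time to complete.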
Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«