One anomaly I saw in the initial cache directory was a set of s3*
files sitting alongside the files for this gs* bucket.

fsck.log                      s3:=2F=2Fcust-company=2Fcust-backup01=2F.db
gs:=2F=2Fcust-backup01-cache  s3:=2F=2Fcust-company=2Fcust-backup01=2F.params
gs:=2F=2Fcust-backup01.db     s3:=2F=2Fcust-company=2Fcust-backup01.db
gs:=2F=2Fcust-backup01.params s3:=2F=2Fcust-company=2Fcust-backup01.params
mount.log

It seemed to me that the s3* metadata files were probably not valid,
so I first made a tar archive of these files and cleared the cache
before applying the downloaded metadata. Should I restore any of
those files before re-running fsck?
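For reference, this is roughly the sequence I followed. The sketch below uses a scratch directory so it is safe to try anywhere; the real cache directory, the backup path, and the gs:// URL would of course be your own, and the s3ql commands (shown commented out) assume s3ql is installed:

```shell
# Hedged sketch of the backup/clear/re-fsck sequence, using a scratch
# directory as a stand-in for the real s3ql cache directory.
cachedir=$(mktemp -d)             # stand-in for e.g. ~/.s3ql
echo demo > "$cachedir/fsck.log"  # pretend cache content

# 1. Preserve the existing cache contents before touching anything.
tar -czf /tmp/s3ql-cache-backup.tar.gz -C "$cachedir" .

# 2. Clear the cache, keeping the directory itself.
rm -rf "$cachedir"/*

# 3. Re-download the metadata and run fsck with the SAME cache
#    directory (run these only where s3ql is installed; the bucket
#    URL here is just the one from my listing):
# s3qladm --cachedir "$cachedir" download-metadata gs://cust-backup01
# fsck.s3ql --cachedir "$cachedir" gs://cust-backup01
```

The point of step 1 is that nothing in the cache is lost irrecoverably: if fsck later needs one of the s3* files, it can be restored from the tarball.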



On Tue, Aug 9, 2016 at 10:07 AM, Andy Cress <[email protected]> wrote:
> Aha, that does proceed, but it looks like it is deleting nearly every
> s3ql_data_ block with 'Deleted spurious object nnn', so I interrupted
> it.  Debug shows a sequence like that below, repeated.  Is this
> ok/valid?
> Is there a way to validate the metadata?
>
> 2016-08-09 09:58:06.201 [9275] MainThread: [backends.s3c]
> list(s3ql_data_): requesting with marker=
> 2016-08-09 09:58:06.201 [9275] MainThread: [backends.s3c]
> _do_request(): start with parameters ('GET', '/', None, {'marker': '',
> 'prefix': 's3ql_data_', 'max-keys': 1000}, None, None)
> 2016-08-09 09:58:06.201 [9275] MainThread: [backends.s3c]
> _send_request(): processing request for
> /?marker=&prefix=s3ql_data_&max-keys=1000
> 2016-08-09 09:58:06.443 [9275] MainThread: [backends.s3c]
> _do_request(): request-id: None
> 2016-08-09 09:58:06.599 [9275] MainThread: [backends.s3c] 
> delete(s3ql_data_280)
> 2016-08-09 09:58:06.599 [9275] MainThread: [backends.s3c]
> _do_request(): start with parameters ('DELETE', '/s3ql_data_280',
> None, None, None, None)
> 2016-08-09 09:58:06.599 [9275] MainThread: [backends.s3c]
> _send_request(): processing request for /s3ql_data_280
> 2016-08-09 09:58:06.843 [9275] MainThread: [backends.s3c]
> _do_request(): request-id: None
> 2016-08-09 09:58:06.843 [9275] MainThread: [fsck] Deleted spurious object 280
>
>
> On Mon, Aug 8, 2016 at 7:18 PM, Nikolaus Rath <[email protected]> wrote:
>> On Aug 08 2016, Andy Cress <[email protected]> wrote:
>>> I encountered this problem on one particular bucket and cannot seem to
>>> recover the s3ql filesystem. This system had been running for some
>>> time with s3ql-1.17 just fine, but now is stuck.  I'm hoping that you
>>> can tell me how to use the backup metadata to get it going again.
>> [...]
>>>
>>> I did notice that s3qladm download-metadata failed (with the same
>>> primary key error) for backup no. 0, but succeeds for no. 1 below.
>>> I'm thinking that perhaps I can upload the s3ql_metadata_bak_0
>>> instead of s3ql_metadata_bak, but if so, I need to know the
>>> procedure for that.
>>
>> Have you tried to simply run fsck.s3ql after s3qladm download-metadata?
>> This should make fsck.s3ql use the copy that you manually
>> downloaded before. Just make sure to use the same cache directory.
>>
>> Best,
>> -Nikolaus
>>
>> --
>> GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
>> Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
>>
>>              »Time flies like an arrow, fruit flies like a Banana.«
>>
>> --
>> You received this message because you are subscribed to the Google Groups 
>> "s3ql" group.
>> To unsubscribe from this group and stop receiving emails from it, send an 
>> email to [email protected].
>> For more options, visit https://groups.google.com/d/optout.
