On Jul 01 2016, Peter Auyeung <[email protected]> wrote:
>> On Jul 1, 2016, at 1:43 PM, Nikolaus Rath <[email protected]> wrote:
>>
>> On Jul 01 2016, Peter Auyeung <[email protected]> wrote:
>>>>>> On Jun 29 2016, Peter Auyeung <[email protected]> wrote:
>>>>>> On Wednesday, June 29, 2016 at 11:53:06 AM UTC-7, Nikolaus Rath wrote:
>>>>>> On Jun 29 2016, Peter Auyeung <[email protected]> wrote:
>>>>>>> Is there a way to back up and restore S3QL's metadata?
>>>>>>
>>>>>> You could copy/restore the s3ql_metadata object in your storage backend,
>>>>>> or the *.db file in your --cachedir.
>>>>>
>>>>> I did a copy and restore of the S3QL backend on local storage and am
>>>>> getting the following error:
>>>>>
>>>>> # mount.s3ql local:///ntap4/restore/ /s3ql/restore/
>>>>> Using 10 upload threads.
>>>>> Autodetected 4034 file descriptors available for cache entries
>>>>> Enter file system encryption passphrase:
>>>>> ERROR: Uncaught top-level exception:
>>>>> Traceback (most recent call last):
>>>>> File "/usr/bin/mount.s3ql", line 9, in <module>
>>>>> load_entry_point('s3ql==2.18', 'console_scripts', 'mount.s3ql')()
>>>>> File "/usr/lib/s3ql/s3ql/mount.py", line 129, in main
>>>>> (param, db) = get_metadata(backend, cachepath)
>>>>> File "/usr/lib/s3ql/s3ql/mount.py", line 374, in get_metadata
>>>>> param = backend.lookup('s3ql_metadata')
>>>>> File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 77, in lookup
>>>>> return self._verify_meta(key, meta_raw)[1]
>>>>> File "/usr/lib/s3ql/s3ql/backends/comprenc.py", line 137, in _verify_meta
>>>>> raise CorruptedObjectError('HMAC mismatch')
>>>>> s3ql.backends.common.CorruptedObjectError: HMAC mismatch
>>>>
>>>>
>>>> Well.. you did something wrong. Maybe you copied one of the
>>>> s3ql_metadata_bak objects to s3ql_metadata? This will give you a
>>>> checksum error.
>>>>
>>>> That said, I think you should explain what problem you are trying to
>>>> solve. I don't think what you're trying to do is the right solution.
>>> I am trying to back up an S3QL file system to Google Cloud, either as S3QL or not
>>
>> This doesn't make sense. If you're trying to back up to Google Cloud,
>> what are you trying to do above? Clearly you're trying to mount
>> something from local storage.
>>
> That would be S3QL on local storage.
> I'm trying to back up locally to S3QL, take snapshots, and replicate to Google.
So why are you talking about metadata then? You should be replicating
the entire folder that you pass to the local backend.
Also, if you correctly back up and restore, then by definition S3QL can't
even tell that anything happened. If after the restore S3QL doesn't work
anymore, then that's not a problem with S3QL but with your
backup/restore procedure.
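For example, roughly like this (just a sketch: /ntap4/s3ql_data and /s3ql/local
are made-up names for your backend directory and mount point, and /ntap4/restore/
is the copy you were mounting above):

    # Unmount first (or run fsck.s3ql on the copy afterwards) so the cache
    # has been flushed and nothing is missing from the backend directory.
    umount.s3ql /s3ql/local

    # Replicate the *entire* backend directory: all data objects plus
    # s3ql_metadata, the s3ql_metadata_bak_* objects and s3ql_passphrase.
    rsync -a --delete /ntap4/s3ql_data/ /ntap4/restore/

If you copy it like that, mounting local:///ntap4/restore/ should behave exactly
like mounting the original.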
>>> I was trying to rsync the local S3QL storage to the one on Google Storage
>>
>> You can do that, but then you won't be able to mount the file system
>> using the Google Storage backend (the backends use different file
>> formats). contrib/clone_fs.py can do the conversion though.
>>
> Does clone_fs.py incrementally sync two S3QL file systems?
No.
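It does a full copy of all storage objects every time it runs. An invocation
looks roughly like this (check clone_fs.py --help for the exact options; the
bucket name is made up):

    # One-shot, full copy from the local backend to Google Storage --
    # there is no delta/incremental logic in this script.
    python contrib/clone_fs.py local:///ntap4/s3ql_data gs://my-bucket/s3ql-clone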
> I am trying to reduce the cloud PUT/GET traffic during incremental syncs
Then you should be working with two mounted S3QL file systems (one
using the local backend and one using the Google Storage backend) and do
the synchronization using e.g. rsync.
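Something like this (again only a sketch: the mount points and bucket name are
made up, and the backend credentials/passphrases would come from your authinfo2
file or be entered interactively):

    # Mount both file systems, one per backend.
    mount.s3ql local:///ntap4/s3ql_data /s3ql/local
    mount.s3ql gs://my-bucket/s3ql /s3ql/google

    # rsync only transfers files that actually changed, so only the
    # corresponding blocks are downloaded/uploaded by the two mounts.
    rsync -aHAX --delete /s3ql/local/ /s3ql/google/

    umount.s3ql /s3ql/google
    umount.s3ql /s3ql/local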
Best,
-Nikolaus
--
GPG encrypted emails preferred. Key id: 0xD113FCAC3C4E599F
Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«