On Sunday, August 12, 2018 at 12:26:09 AM UTC+10, Nikolaus Rath wrote:
<.. snip...>


>
> Try the following (dangerous) patch. It will make fsck.s3ql skip the 
> actual data upload. If you then run mount.s3ql with the same cache 
> directory, it should use the local copy and you can trim it down: 
>
> diff --git a/src/s3ql/fsck.py b/src/s3ql/fsck.py 
> --- a/src/s3ql/fsck.py 
> +++ b/src/s3ql/fsck.py 
> @@ -1340,7 +1340,7 @@ 
>      param['last_fsck'] = time.time() 
>      param['last-modified'] = time.time() 
>   
> -    dump_and_upload_metadata(backend, db, param) 
> +    #dump_and_upload_metadata(backend, db, param) 
>      save_params(cachepath, param) 
>   
>      log.info('Cleaning up local metadata...') 
>
>
>
This was enough to get me online again. I then did a temporary mount with 
"--metadata-upload-interval 2629800" just to do some scripted trimming of 
older increments with s3qlrm, followed by a successful unmount where I was 
happy to see:

s3ql.metadata.upload_metadata: Wrote 4.33 GiB of compressed metadata.
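
For reference, the trimming was nothing fancy; a minimal sketch of the idea in
Python (the mount point, directory naming, and retention count below are all
made up for illustration):

    #!/usr/bin/env python3
    # Trim old backup increments on a mounted S3QL filesystem.
    # Assumes increment directories are named so that a lexical sort is
    # also a chronological sort (e.g. backup-2018-08-12) -- an assumption.
    import glob
    import subprocess

    KEEP = 30  # hypothetical number of newest increments to keep

    increments = sorted(glob.glob('/mnt/s3ql/backup-*'))
    for path in increments[:-KEEP]:
        # s3qlrm removes a directory tree on S3QL much faster than rm -r
        subprocess.run(['s3qlrm', path], check=True)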

Now I have some instances of "clone_fs.py" running so I can break this down 
into 3 separate filesystems to spread the load (and metadata), and I have 
backed out the dangerous but useful patch to fsck.
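
In case it helps anyone in the same spot, this is roughly how I am running
them; the storage URLs are placeholders, and I am assuming the contrib
clone_fs.py script still takes source and destination storage URLs as its
positional arguments:

    #!/usr/bin/env python3
    # Launch one clone_fs.py per target filesystem, in parallel.
    # All storage URLs below are hypothetical placeholders.
    import subprocess

    SRC = 's3://mybucket/original'
    DESTS = ['s3://mybucket/part1',
             's3://mybucket/part2',
             's3://mybucket/part3']

    procs = [subprocess.Popen(['python3', 'contrib/clone_fs.py', SRC, dst])
             for dst in DESTS]
    for proc in procs:
        proc.wait()

The plan being to mount each clone afterwards and s3qlrm everything that does
not belong in it.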

Thanks!

