On Saturday, August 11, 2018 at 9:34:54 PM UTC+10, drobert...@gmail.com wrote:
> On Saturday, August 11, 2018 at 7:34:29 PM UTC+10, Nikolaus Rath wrote:
>> On Aug 10 2018, drobert...@gmail.com wrote: 
>> > But a few days ago the mount went offline with the error: 
>> > s3ql.backends.s3c.HTTPError: 413 Request Entity Too Large 
>> [...] 
>> > s3ql.metadata.upload_metadata: Compressing and uploading metadata... 
>> > 2018-08-08 15:41:03.977 18159:Metadata-Upload-Thread root.excepthook: 
>> >     raise HTTPError(resp.status, resp.reason, resp.headers) 
>> > s3ql.backends.s3c.HTTPError: 413 Request Entity Too Large 
>> Most likely your metadata object has exceeded the maximum size allowed 
>> by the server (this means that s3ql_verify will not show the problem, 
>> because it does not upload any metadata). 
>> Unfortunately there is currently no workaround for this. See 
>> https://bitbucket.org/nikratio/s3ql/issues/266/support-metadata-5-gb 
> Is there a way to force a mount of the existing data using local metadata 
> so I can trim down the data until the metadata is smaller? I was keeping 
> a lot of extra copies "because I could", completely unaware I was running 
> into a brick wall. Or could I use the temporary mount to re-factor this 
> into a set of smaller S3QL filesystems instead of one large one?
> So it seems my fsck.s3ql run succeeds while working with the local 
> metadata; it only fails when it uploads the metadata to OVH.

Or is there a way to copy this data to a storage method (any storage 
method!) without the 5 GB metadata limit (sshfs? s3? something?) so it can 
be accessed? I'm assuming clone_fs.py as it is written will use the stored 

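If the existing filesystem can still be mounted (the 413 only hits when the
Metadata-Upload-Thread pushes metadata back to the server), one possible shape
for the "split into smaller filesystems" idea is sketched below. This is only
a sketch, not a tested procedure: the `s3c://ovh.example/...` storage URLs and
the `/mnt/...` paths are placeholders, and note that a plain `rsync` copy will
not preserve any deduplication you created with `s3qlcp`, so the copies may
take more backend space than the originals.

```shell
# HYPOTHETICAL sketch -- storage URLs, bucket names and mount points are
# placeholders; adjust authfile/credentials as in your existing setup.

# Create a new, smaller target filesystem and mount both old and new.
mkfs.s3ql s3c://ovh.example/new-bucket-1
mkdir -p /mnt/old /mnt/new1
mount.s3ql s3c://ovh.example/old-bucket /mnt/old
mount.s3ql s3c://ovh.example/new-bucket-1 /mnt/new1

# Copy one subset of the data into the new filesystem.
rsync -aHAX /mnt/old/projects/ /mnt/new1/projects/

# Unmount cleanly: this is the step that compresses and uploads metadata,
# so the new filesystem must stay small enough for its metadata object
# to fit under the server's request-size limit.
umount.s3ql /mnt/new1

# Repeat with new-bucket-2, new-bucket-3, ... for the remaining subsets,
# then retire the old filesystem once everything is copied out.
umount.s3ql /mnt/old
```

Whether the old filesystem can be mounted at all in this state (given that its
last metadata upload failed) is exactly the open question above; if it cannot,
this sketch does not apply.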