On Dec 21 2021, "Brian C. Hill" <[email protected]> wrote:
> Hello,
>
> I am using CentOS 7 and s3ql 3.8.0 (though my fs was originally created
> with 3.7.1, I think).
>
> s3ql_verify reported this:
>
> WARNING: Object 2076394 is corrupted (expected size 258048, actual size
> 256871)
>
> I assume that I need to use fix_block_sizes.py to fix that,
This is not what the tool was intended for. fix_block_sizes.py was
designed to fix one specific problem caused by a bug in previous S3QL
versions that resulted in files being null-padded to the next 512-byte
boundary (i.e., the metadata indicates a larger size than what is
physically stored).
In your case, the stored data seems to be 1177 bytes shorter than what
the metadata says. In other words, this is either a different S3QL bug,
or the block was corrupted on the remote server.
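For clarity, the mismatch in the warning above works out as follows (a
quick sketch; the object ID and both sizes are taken directly from the
s3ql_verify report):

```python
# Sizes from the s3ql_verify warning for object 2076394.
expected = 258048  # size recorded in the filesystem metadata
actual = 256871    # size of the object actually stored on the backend

# The stored object is smaller than the metadata claims:
print(expected - actual)  # 1177 bytes missing from the stored object
```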
fix_block_sizes.py will simply update the metadata to match the physical
size and thereby get rid of the padding. I am not sure what this would
do in your case - you may end up appending bogus data to a file or
losing valuable data. The safer choice would be to remove the damaged
object, after which fsck.s3ql will tell you which files may need to be
recovered from elsewhere.
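A possible sequence (a sketch only, with several assumptions: an S3
backend reachable via the aws CLI, a hypothetical bucket named foobar,
and S3QL's convention of storing block N under the key s3ql_data_N; the
destructive commands are left commented out so you can review them
before running anything):

```shell
# Object ID reported by s3ql_verify; assumed key naming: s3ql_data_<id>
OBJ="s3ql_data_2076394"

# Inspect the object before touching it (bucket name is hypothetical):
#   aws s3 ls "s3://foobar/${OBJ}"

# Remove the corrupted object:
#   aws s3 rm "s3://foobar/${OBJ}"

# With the object gone, fsck.s3ql (run against the unmounted
# filesystem) reports which files referenced the missing block:
#   fsck.s3ql s3://foobar

echo "Would remove: ${OBJ}"
```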
> but I don't see any
> documentation for fix_block_sizes.py, and it doesn't provide a 'usage'
> summary when run
> without arguments.
Just pass it the storage URL, e.g. fix_block_sizes.py s3://foobar
> Can fsck.s3ql not fix that problem?
No, fsck.s3ql only checks for logical consistency; it does not attempt
to download the entire filesystem data (which is what s3ql_verify does).
Best,
-Nikolaus
--
GPG Fingerprint: ED31 791B 2C5C 1613 AF38 8B8A D113 FCAC 3C4E 599F
»Time flies like an arrow, fruit flies like a Banana.«
--
You received this message because you are subscribed to the Google Groups
"s3ql" group.
To view this discussion on the web visit
https://groups.google.com/d/msgid/s3ql/87czliclms.fsf%40vostro.rath.org.