Hi,
We use version 0.60 with the .NET version of the AWS SDK. Basically, what
we do is a CopyObject onto the same bucket and same key, adding metadata.
This only affects files larger than 512 KB.
var copyRequest = new CopyObjectRequest()
    .WithDirective(S3MetadataDirective.REPLACE)
    .WithSourceKey(key)
    .WithSourceBucket(HtmlBucket)
    .WithDestinationKey(key)
    .WithDestinationBucket(HtmlBucket)
    .WithMetaData(MetaHtmlLocations, metaLocation);
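For comparison, here is a minimal sketch of the same in-place, metadata-replacing copy expressed as boto3 copy_object arguments (the bucket, key, metadata names, and endpoint below are placeholders, not values from our setup; the actual client call is commented out):

```python
def build_copy_args(bucket, key, metadata):
    """Build arguments for an in-place, metadata-replacing S3 copy."""
    return {
        "Bucket": bucket,
        "Key": key,
        # Source and destination are the same object: a self-copy.
        "CopySource": {"Bucket": bucket, "Key": key},
        "Metadata": metadata,
        # REPLACE tells the gateway to take the metadata from this
        # request instead of copying it from the source object.
        "MetadataDirective": "REPLACE",
    }

args = build_copy_args("html-bucket", "some/key.html",
                       {"meta-html-locations": "value"})
print(args["MetadataDirective"])  # REPLACE

# import boto3
# s3 = boto3.client("s3", endpoint_url="http://my-rgw.example.com")
# s3.copy_object(**args)
```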
--
Yann
-----Original Message-----
From: Yehuda Sadeh [mailto:[email protected]]
Sent: Monday, 22 April 2013 16:45
To: Yann ROBIN
Cc: [email protected]
Subject: Re: [ceph-users] Radosgw using s3 copy corrupt files
On Mon, Apr 22, 2013 at 1:03 AM, Yann ROBIN <[email protected]> wrote:
> Hi,
>
>
>
> We use radosgw and s3 API and we recently needed to update metadata on
> some files.
>
> So we used the copy part of the S3 API for an in-place replacement of
> the file, adding some metadata.
>
>
>
> We quickly saw very high response times for some of those uploaded files.
> But there was no slow request.
>
> We looked at some files and saw that files larger than 512 KB were corrupted.
>
> After 512 KB, the content of the files is:
>
> Status: 404
>
> Content-Length: 75
>
> Accept-Ranges: bytes
>
> Content-type: application/xml
>
>
>
> <?xml version="1.0"
> encoding="UTF-8"?><Error><Code>NoSuchKey</Code></Error>
>
>
What version are you using? I tried to reproduce this but couldn't. I vaguely
remember some fixes in that area going in, but that was a while back.
If you can reproduce it, an rgw log with 'debug rgw = 20' and 'debug ms = 1'
could assist here.
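For example, those debug levels can go in the rgw section of ceph.conf (the section name below is an assumption; adjust it to match your gateway instance) before restarting the daemon:

```
[client.radosgw.gateway]
    debug rgw = 20
    debug ms = 1
```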
Thanks,
Yehuda
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com