No, but I've seen it in RadosGW too.  I've been meaning to post about it.
I get about ten a day, out of about 50k objects/day.


clewis@clewis-mac ~ (-) $ s3cmd ls s3://live-32/ | grep '1970-01' | head -1
1970-01-01 00:00         0   s3://live-32/39020f17716a18b39efd8daa96e8245eb2901f353ba1004e724cb56de5367055

Also note the 0-byte file size and the missing MD5 checksum.  What I find
interesting is that I can fix this by downloading the file and uploading it
again.  The filename is the SHA-256 hash of the file contents, and the file
downloads correctly every time.  I never see this in my replication
cluster, only in the primary cluster.
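
In case it helps anyone else hitting this, here's a rough sketch of that
repair loop.  It's written against boto3 rather than the PHP SDK our
writers use, the endpoint URL is only a guess based on the Host header in
the log below, and I haven't run it in bulk, so treat it as a starting
point rather than a tested fix:

#!/usr/bin/env python
# Rough repair sketch, not verified in bulk.  Assumptions: boto3, the
# "live-32" bucket from the listing above, and an endpoint guessed from the
# Host header in the access log.
import hashlib
import boto3

s3 = boto3.client("s3", endpoint_url="http://us-west-1.ceph.cdlocal")  # assumed endpoint
bucket = "live-32"

paginator = s3.get_paginator("list_objects")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        # The broken index entries show up as 0 bytes with a 1970-01-01 mtime.
        if obj["Size"] != 0 or obj["LastModified"].year != 1970:
            continue
        key = obj["Key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        # The key is the SHA-256 of the contents, so verify the download first.
        if hashlib.sha256(body).hexdigest() != key:
            print("checksum mismatch, skipping %s" % key)
            continue
        # Re-uploading the same bytes rewrites the index entry with a sane
        # size, mtime and ETag.
        s3.put_object(Bucket=bucket, Key=key, Body=body)
        print("repaired %s" % key)

It's just the manual download/re-upload fix in a loop, nothing clever.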


The access log looks kind of interesting for this file:
192.168.2.146 - - [23/Mar/2015:20:11:30 -0700] "PUT
/39020f17716a18b39efd8daa96e8245eb2901f353ba1004e724cb56de5367055 HTTP/1.1"
500 722 "-" "aws-sdk-php2/2.7.20 Guzzle/3.9.2 curl/7.40.0 PHP/5.5.21"
"live-32.us-west-1.ceph.cdlocal"
192.168.2.146 - - [23/Mar/2015:20:12:01 -0700] "PUT
/39020f17716a18b39efd8daa96e8245eb2901f353ba1004e724cb56de5367055 HTTP/1.1"
200 205 "-" "aws-sdk-php2/2.7.20 Guzzle/3.9.2 curl/7.40.0 PHP/5.5.21"
"live-32.us-west-1.ceph.cdlocal"
192.168.2.146 - - [23/Mar/2015:20:12:09 -0700] "HEAD
/39020f17716a18b39efd8daa96e8245eb2901f353ba1004e724cb56de5367055 HTTP/1.1"
200 250 "-" "aws-sdk-php2/2.7.20 Guzzle/3.9.2 curl/7.40.0 PHP/5.5.21"
"live-32.us-west-1.ceph.cdlocal"

Thirty-one seconds is a big gap between the initial PUT and the retry.  The
file is only 43 kB, so it would have been a single direct PUT, not a
multipart upload.  I haven't verified this for all of the affected objects.

There's nothing in radosgw.log at the time of the 500.
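
Here's a rough, untested sketch of how I'd check whether the 31-second gap
is typical: pull every PUT out of the access log and report the keys where
a 500 was followed by a successful retry.  The log path is an assumption,
and the regex just matches lines like the excerpt above:

#!/usr/bin/env python
# Sketch: report PUTs that returned 500 and were later retried successfully.
import re
from datetime import datetime

LOG = "/var/log/radosgw/access.log"  # assumed path; point it at your access log
PAT = re.compile(r'\[(?P<ts>[^\]]+)\] "PUT (?P<key>\S+) HTTP/1\.[01]" (?P<status>\d{3})')

failed = {}  # object key -> time of the first 500
with open(LOG) as f:
    for line in f:
        m = PAT.search(line)
        if not m:
            continue
        # Drop the UTC offset; every line in one log shares it, so the gaps
        # come out the same either way.
        ts = datetime.strptime(m.group("ts").split()[0], "%d/%b/%Y:%H:%M:%S")
        key, status = m.group("key"), m.group("status")
        if status == "500":
            failed.setdefault(key, ts)
        elif status == "200" and key in failed:
            gap = (ts - failed.pop(key)).total_seconds()
            print("%s retried after %.0f s" % (key, gap))

If the gaps all cluster around a similar value, that would at least confirm
these are the PHP SDK's automatic retries after the 500.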





On Wed, Apr 1, 2015 at 2:42 AM, Jimmy Goffaux <ji...@goffaux.fr> wrote:

> Hello,
>
> I found a strange behavior in Ceph.  This behavior is visible on buckets
> (RGW) and pools (RBD).
> pools:
>
> ``
> root@:~# qemu-img info rbd:pool/kibana2
> image: rbd:pool/kibana2
> file format: raw
> virtual size: 30G (32212254720 bytes)
> disk size: unavailable
> Snapshot list:
> ID                      TAG                     VM SIZE  DATE                 VM CLOCK
> snap2014-08-26-kibana2  snap2014-08-26-kibana2  30G      1970-01-01 01:00:00  00:00:00.000
> snap2014-09-05-kibana2  snap2014-09-05-kibana2  30G      1970-01-01 01:00:00  00:00:00.000
> ``
>
> As you can see, all the dates are set to 1970-01-01?
>
> Here is the JSON returned for an object in a bucket:
>
> ``
> {'bytes': 0, 'last_modified': '1970-01-01T00:00:00.000Z', 'hash': u'',
> 'name': 'bab34dad-531c-4609-ae5e-62129b43b181'}
> ``
>
> You can see this is the same for the "Last Modified" date.
>
> Do you have any ideas?
>
> --
>
> Jimmy Goffaux
_______________________________________________
ceph-users mailing list
ceph-users@lists.ceph.com
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
