I've noticed that when I use a large object size such as 100M with rados bench write, the benchmark fails with an I/O error:

rados -p data2 bench 60 write --no-cleanup -b 100M
 Maintaining 16 concurrent writes of 104857600 bytes for up to 60 seconds or 0 
objects
   sec Cur ops   started  finished  avg MB/s  cur MB/s  last lat   avg lat
     0       0         0         0         0         0         -         0
     1       3         3         0         0         0         -         0
     2       5         5         0         0         0         -         0
     3       8         8         0         0         0         -         0
     4      10        10         0         0         0         -         0
     5      13        13         0         0         0         -         0
     6      15        15         0         0         0         -         0
error during benchmark: -5
error 5: (5) Input/output error

An object size of 32M works fine, and the cluster appears otherwise healthy.

This seems related to the following thread, but I didn't see a resolution there:
http://lists.ceph.com/pipermail/ceph-users-ceph.com/2014-March/028288.html

Is there a timeout that is kicking in?
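One guess on my part (an assumption, not a confirmed diagnosis): the OSD option osd_max_write_size, which caps the size of a single write and defaults to 90 MB, would sit exactly between my working 32M and failing 100M object sizes. If that's the culprit, a minimal ceph.conf sketch to test with would be:

    [osd]
    ; Assumption: osd_max_write_size is specified in MB and defaults to 90,
    ; so a single 100M bench write would exceed it. Raising it is purely
    ; an experiment to confirm the cause, not a recommended setting.
    osd max write size = 200

Has anyone confirmed whether exceeding that limit surfaces as error (5) Input/output error in rados bench?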

-- Tom Deneau

_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com