On 2015-07-13 12:01, Gregory Farnum wrote:
On Mon, Jul 13, 2015 at 9:49 AM, Ilya Dryomov <idryo...@gmail.com> wrote:
On Fri, Jul 10, 2015 at 9:36 PM, Jan Pekař <jan.pe...@imatic.cz> wrote:
Hi all,

I think I found a bug in cephfs kernel client.
When I create a directory in cephfs and set its layout to

ceph.dir.layout="stripe_unit=1073741824 stripe_count=1
object_size=1073741824 pool=somepool"
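
For reference, a directory layout like this is normally set through extended attributes on the mounted filesystem. A minimal sketch, assuming cephfs is mounted at /mnt/cephfs, the directory is /mnt/cephfs/bigfiles, and "somepool" has already been added as a cephfs data pool (paths and pool name are placeholders):

    # set the layout attributes on the directory (affects files created afterwards)
    setfattr -n ceph.dir.layout.stripe_unit  -v 1073741824 /mnt/cephfs/bigfiles
    setfattr -n ceph.dir.layout.stripe_count -v 1          /mnt/cephfs/bigfiles
    setfattr -n ceph.dir.layout.object_size  -v 1073741824 /mnt/cephfs/bigfiles
    setfattr -n ceph.dir.layout.pool         -v somepool   /mnt/cephfs/bigfiles
    # read the combined layout back to verify
    getfattr -n ceph.dir.layout /mnt/cephfs/bigfiles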

attempts to write a larger file cause the kernel to hang or the machine to reboot.
When I use the FUSE-based cephfs client, it works (I currently have some
issues with FUSE and concurrent writes as well, but that is a different kind of
problem).

Which kernel are you running?  What do you see in the dmesg when it
hangs?  What is the panic splat when it crashes?  How big is the
"larger" file that you are trying to write?
I'm running the 4.0.3 kernel, but it was the same with older ones.
The computer hangs, so I cannot display dmesg; I will try to catch it with remote syslog.
The larger file is about 500 MB. Last time a 300 MB file was OK.
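
If the box locks up before anything reaches syslog, netconsole is one way to get the splat out, since it pushes kernel messages over UDP to another host as they are printed. A rough sketch, assuming eth0 on the crashing client and 192.168.1.10 as the collecting host (addresses, port and MAC are placeholders):

    # on the crashing client
    modprobe netconsole netconsole=6665@192.168.1.5/eth0,6666@192.168.1.10/00:11:22:33:44:55
    # make sure the full splat goes to the console
    dmesg -n 8
    # on the collecting host (BSD netcat syntax)
    nc -u -l 6666 | tee cephfs-crash.log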


I think 1073741824 is the maximum value for object_size and stripe_unit, or can I
set them higher?

The default values "stripe_unit=4194304 stripe_count=1 object_size=4194304"
work without problems on write.

My goal was not to split the file across OSDs in 4 MB pieces, but to store it
in one piece.
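
(For scale: with the default layout a 500 MB file maps to roughly 500 MB / 4 MB = 125 objects, while with object_size = stripe_unit = 1 GiB the same file lands entirely in a single object, so the individual writes the client sends to an OSD can be far larger than in the default case.)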

This is generally not a very good idea - you have to consider the
distribution of objects across PGs and how your OSDs will be utilized.

Yeah. Beyond that, the OSDs will reject writes exceeding a certain
size (90MB by default). I'm not sure exactly what mismatch you're
running into here but I can think of several different ways a >1GB
write/single object could get stuck; it's just not a good idea.
-Greg
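
For reference, the per-write limit Greg mentions corresponds to the osd_max_write_size option (90 MB by default). A quick way to check it on a running cluster, assuming osd.0 and access to its admin socket on that host (the daemon name is a placeholder):

    # live value via the OSD's admin socket
    ceph daemon osd.0 config get osd_max_write_size
    # or the value the tools would pick up from ceph.conf / defaults
    ceph --show-config | grep osd_max_write_size

Raising the limit is not necessarily a fix; the point above is that a layout producing multi-hundred-megabyte writes to a single object is fragile by design.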
I have been using it this way from the beginning, and with FUSE I had no problem with big files. Objects on my OSDs are often 1 GB in size, with no problems.



--
============
Ing. Jan Pekař
jan.pe...@imatic.cz | +420603811737
----
Imatic | Jagellonská 14 | Praha 3 | 130 00
http://www.imatic.cz
============