Alexis, would you please do some additional profiling, like checking
how exactly the CPU time of the problematic node is used during such a
hang? Your issue looks very similar to mine, where the OSDs produce
large peaks of SY (system) CPU time (though in my case a heavy test may
cause the OSD processes to hang forever).
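In case it helps, a rough way to do that (assuming the sysstat and perf
packages are installed on the OSD host, and the daemons are the usual
ceph-osd processes) would be something like:

  # user vs. system CPU per ceph-osd process, sampled every 5 seconds
  pidstat -u -p $(pidof ceph-osd | tr ' ' ',') 5

  # kernel/userspace hotspots while the hang is actually happening
  perf top -p $(pidof ceph-osd | tr ' ' ',')

If you also see the SY peaks there, perf should show which kernel paths
are eating the time.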
On Wed, Ja
Hello,
And thanks for your answers.
I've limited the client to 1 Gb/s in order to remove the network imbalance.
So now, everything is at 1 Gb/s.
But... I still have trouble. I can give you some more details
of my test procedure:
1. Start a fresh cluster without cephx, with 2 GB journals on SSD.
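To be concrete, that step corresponds to a ceph.conf fragment roughly
like this (the journal path is just a placeholder for where the SSD is
mounted, so treat it as a sketch rather than my exact file):

  [global]
      auth cluster required = none
      auth service required = none
      auth client required = none

  [osd]
      osd journal size = 2048                   ; 2 GB journal
      osd journal = /srv/ssd/osd.$id.journal    ; placeholder: journal file on the SSD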
On Monday, January 14, 2013 08:51:57 Alexis GÜNST HORN wrote:
> At the end, the client mountpoint becomes unresponsive, and the only
> way out is to force a reboot.
I am going to throw this out there as I've seen something similar, but not
with Ceph. Back in 2005-ish I was experimenting with ATA over Ethernet
Hello,
Thanks for your answer.
Both OSD hosts and the client run CentOS 6.3 with a 3.7.1 kernel.
And yes, the script creates empty loop devices of different sizes.
The MDS & MON are on one of the 2 OSD hosts.
I already tried putting them on a separate server.
I know that CephFS is not considered stable yet, bu
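For reference, the loop-device part of the script is roughly the
following (a sketch; the mountpoint and sizes are illustrative, not the
exact script):

  # create sparse image files of various sizes on the CephFS mount
  for size in 1G 5G 20G; do
      truncate -s "$size" /mnt/cephfs/img-"$size"
  done

  # attach each file to the next free loop device
  for f in /mnt/cephfs/img-*; do
      losetup -f "$f"
  done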
Hi,
On 01/14/2013 08:51 AM, Alexis GÜNST HORN wrote:
Hello,
I have a 0.56.1 Ceph cluster up and running. RBD is working fine, but
I have some trouble with CephFS.
Here is my config:
- only 2 OSD nodes, with 10 disks each + SSD for journal.
- OSD hosts are gigabit (public) + gigabit (private)
- one client which is 10 gigabit
Hello,
I have a 0.56.1 Ceph cluster up and running. RBD is working fine, but
I have some trouble with CephFS.
Here is my config:
- only 2 OSD nodes, with 10 disks each + SSD for journal.
- OSD hosts are gigabit (public) + gigabit (private)
- one client which is 10 gigabit
The client mounts a CephFS filesystem.
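To make the setup concrete, here is roughly what the network split and
the client mount look like (addresses, subnets and the mountpoint are
placeholders, not my real values):

  # ceph.conf on the OSD hosts
  [global]
      public network  = 192.168.1.0/24     ; gigabit, client-facing
      cluster network = 192.168.2.0/24     ; gigabit, OSD replication traffic

  # on the client (kernel CephFS client)
  mount -t ceph 192.168.1.10:6789:/ /mnt/cephfs -o name=admin

(With cephx enabled you would also need a secret= or secretfile= option
on the mount.)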