Hi,

Thanks for the input.

I also see tons of "socket closed" messages; I recall that they are
harmless. In any case, cephx has been disabled on my platform from the
beginning... Can anyone confirm or refute my "scrub theory"?
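For reference, here is a rough sketch of one way to test this: watch
scrub events in the cluster log while sampling the OSD's resident
memory, and see whether the jumps line up. This assumes the default
/etc/ceph/ceph.conf location, and OSD_PID is just a placeholder for the
pid of the ceph-osd process under observation:

    # double-check that cephx really is off ([global] auth settings)
    grep -E 'auth (cluster|service|client) required' /etc/ceph/ceph.conf

    # sample the OSD's resident set size once a minute, in the background
    # (OSD_PID = pid of the ceph-osd being watched; set it beforehand)
    while true; do
        echo "$(date '+%F %T') VmRSS=$(awk '/VmRSS/ {print $2, $3}' /proc/$OSD_PID/status)"
        sleep 60
    done &

    # meanwhile, watch the cluster log for scrub start/finish events
    ceph -w | grep -i scrub

If the RSS growth correlates with scrub activity, that would support
the theory; if memory grows steadily with no scrubs running, it points
elsewhere.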
--
Regards,
Sébastien Han.


On Wed, Jan 9, 2013 at 7:09 PM, Sylvain Munaut
<s.mun...@whatever-company.com> wrote:
> Just FYI, I also see growing memory usage on my OSDs, and I see the same logs:
>
> "libceph: osd4 172.20.11.32:6801 socket closed" in the RBD clients
>
>
> Some time ago I traced that problem and correlated it to a cephx issue
> in the OSD in this thread:
>
> http://www.mail-archive.com/ceph-devel@vger.kernel.org/msg10634.html
>
> but the thread kind of died without a solution ...
>
> Cheers,
>
>    Sylvain
