On Mon, Jun 26, 2017 at 07:12:31PM +0200, Massimiliano Cuttini wrote:

> >In your case (rbd-nbd) this error is harmless. You can avoid it by
> >setting in ceph.conf, in the [client] section, something like below:
> >
> >  admin socket = /var/run/ceph/$name.$pid.asok
> >
> >Also, to make every rbd-nbd process log to a separate file, you can
> >set (in the [client] section):
> >
> >  log file = /var/log/ceph/$name.$pid.log
> I need to create all the users in the ceph cluster before using this.
> At the moment the whole cluster was running with the ceph admin keyring.
> However, this is not an issue, I can rapidly deploy all the users
> needed.

I don't understand this. I think just adding these parameters to
ceph.conf should work.
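For reference, the resulting [client] section would look like the
fragment below (just combining the two settings quoted above; $name and
$pid are expanded by Ceph at runtime, so no per-client users need to be
created in advance):

```ini
[client]
    admin socket = /var/run/ceph/$name.$pid.asok
    log file = /var/log/ceph/$name.$pid.log
```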

> 
> >>root     12610  0.0  0.2 1836768 11412 ?       Sl   Jun23   0:43 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-602b05be-395d-442e-bd68-7742deaf97bd
> >> --name client.admin
> >>root     17298  0.0  0.2 1644244 8420 ?        Sl   21:15   0:01 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-3e16395d-7dad-4680-a7ad-7f398da7fd9e
> >> --name client.admin
> >>root     18116  0.0  0.2 1570512 8428 ?        Sl   21:15   0:01 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-41a76fe7-c9ff-4082-adb4-43f3120a9106
> >> --name client.admin
> >>root     19063  0.1  1.3 2368252 54944 ?       Sl   21:15   0:10 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-6da2154e-06fd-4063-8af5-ae86ae61df50
> >> --name client.admin
> >>root     21007  0.0  0.2 1570512 8644 ?        Sl   21:15   0:01 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-c8aca7bd-1e37-4af4-b642-f267602e210f
> >> --name client.admin
> >>root     21226  0.0  0.2 1703640 8744 ?        Sl   21:15   0:01 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-cf2139ac-b1c4-404d-87da-db8f992a3e72
> >> --name client.admin
> >>root     21615  0.5  1.4 2368252 60256 ?       Sl   21:15   0:33 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-acb2a9b0-e98d-474e-aa42-ed4e5534ddbe
> >> --name client.admin
> >>root     21653  0.0  0.2 1703640 11100 ?       Sl   04:12   0:14 rbd-nbd 
> >>--nbds_max 64 map 
> >>RBD_XenStorage-51a45fd8-a4d1-4202-899c-00a0f81054cc/VHD-8631ab86-c85c-407b-9e15-bd86e830ba74
> >> --name client.admin
> >Do you observe the issue for all these volumes? I see many of them
> >were started recently (21:15) while others are older.
> Only some of them.
> But it's random.
> Some old ones and some just plugged become unavailable to Xen.

Do you mean by "unavailable" that the image is corrupted, or does it
report IO errors? If it is the former, then the image was corrupted
some time ago and we would need logs for that period to understand
what happened.

> >Don't you observe sporadic crashes/restarts of rbd-nbd processes? You
> >can associate a nbd device with rbd-nbd process (and rbd volume)
> >looking at /sys/block/nbd*/pid and ps output.
> I really don't know where to look for the rbd-nbd log.
> Can you point it out?

According to some of your previous messages, rbd-nbd is writing to
/var/log/ceph/client.log:

> Under /var/log/ceph/client.log
> I see this error:
> 
> 2017-06-25 05:25:32.833202 7f658ff04e00  0 ceph version 10.2.7
> (50e863e0f4bc8f4b9e31156de690d765af245185), process rbd-nbd, pid 8524

You could look for errors in older log files if they are rotated.
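To make the /sys/block/nbd*/pid association mentioned above concrete,
here is a minimal sketch (the helper name and the overridable sysfs root
argument are my own, for illustration) that prints each mapped nbd
device together with the pid of the rbd-nbd process backing it:

```shell
#!/bin/sh
# Print "<nbd device> <pid>" for every currently mapped nbd device.
list_nbd_pids() {
    sysfs="${1:-/sys}"                    # sysfs root, overridable for testing
    for pidfile in "$sysfs"/block/nbd*/pid; do
        [ -e "$pidfile" ] || continue     # skip unmapped devices
        dev=$(basename "$(dirname "$pidfile")")
        echo "$dev $(cat "$pidfile")"
    done
}

list_nbd_pids
```

You can then cross-reference each pid against the ps output to recover
the rbd volume, e.g.:

    list_nbd_pids | while read dev pid; do ps -o args= -p "$pid"; done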

-- 
Mykola Golub
_______________________________________________
ceph-users mailing list
[email protected]
http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
