The critical bit in your logs below is these lines:
Oct  1 15:04:08 worker1004 kernel: libceph: reset on mds0
Oct  1 15:04:08 worker1004 kernel: ceph: mds0 closed our session
Oct  1 15:04:08 worker1004 kernel: ceph: mds0 reconnect start
...
Oct  1 15:04:08 worker1004 kernel: ceph: mds0 reconnect denied

I couldn't tell you why the kernel client hits disconnects it can't
handle more often than the userspace client does; perhaps it (or at
least this kernel's version) isn't subscribing to mdsmap updates, or
isn't handling them quickly enough.
But that sequence means the mount is busted and can't recover by itself.
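If you want to catch that state automatically, a minimal sketch (assuming the log format shown above; the mount point and remount command are illustrative, not your exact configuration) would be:

```shell
# Minimal sketch: scan kernel log lines for the fatal "reconnect denied"
# marker. On a live node you would feed this from `dmesg` or the syslog
# file, then force a remount, e.g.:
#   umount -f /mnt/cephtest && mount -t ceph <mon-addrs>:/ /mnt/cephtest
# (mount point and monitor addresses are placeholders).
log='ceph: mds0 closed our session
ceph: mds0 reconnect start
ceph: mds0 reconnect denied'
if printf '%s\n' "$log" | grep -q 'reconnect denied'; then
    echo "cephfs mount is dead: forced remount required"
fi
```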
-Greg


On Mon, Oct 1, 2018 at 1:41 PM Andras Pataki <[email protected]>
wrote:

> Unfortunately the CentOS kernel (3.10.0-862.14.4.el7.x86_64) has issues
> as well.  Different ones, but the nodes end up with an unusable mount in
> an hour or two.  Here are some syslogs:
>
> Oct  1 11:50:28 worker1004 kernel: INFO: task fio:29007 blocked for more
> than 120 seconds.
> Oct  1 11:50:28 worker1004 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Oct  1 11:50:28 worker1004 kernel: fio             D
> ffff996d86e92f70     0 29007  28970 0x00000000
> Oct  1 11:50:28 worker1004 kernel: Call Trace:
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c316ec0>] ?
> bit_wait+0x50/0x50
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c318f39>] schedule+0x29/0x70
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c3168a9>]
> schedule_timeout+0x239/0x2c0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bcfa982>] ?
> ktime_get_ts64+0x52/0xf0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c316ec0>] ?
> bit_wait+0x50/0x50
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c31844d>]
> io_schedule_timeout+0xad/0x130
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c3184e8>]
> io_schedule+0x18/0x20
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c316ed1>]
> bit_wait_io+0x11/0x50
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c316a81>]
> __wait_on_bit_lock+0x61/0xc0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bd96684>]
> __lock_page+0x74/0x90
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bcbefd0>] ?
> wake_bit_function+0x40/0x40
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bd971a4>]
> __find_lock_page+0x54/0x70
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bd97ed5>]
> grab_cache_page_write_begin+0x55/0xc0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffffc12798e3>]
> ceph_write_begin+0x43/0xe0 [ceph]
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bd96ac4>]
> generic_file_buffered_write+0x124/0x2c0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffffc127526e>]
> ceph_aio_write+0xa3e/0xcb0 [ceph]
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bdc81be>] ?
> do_numa_page+0x1be/0x250
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bdc8566>] ?
> handle_pte_fault+0x316/0xd10
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8be731a3>] ?
> aio_read_events+0x1f3/0x2e0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8bed5437>] ?
> security_file_permission+0x27/0xa0
> Oct  1 11:50:28 worker1004 kernel: [<ffffffffc1274830>] ?
> ceph_direct_read_write+0xcd0/0xcd0 [ceph]
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8be74783>]
> do_io_submit+0x3c3/0x870
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8be74c40>]
> SyS_io_submit+0x10/0x20
> Oct  1 11:50:28 worker1004 kernel: [<ffffffff8c32579b>]
> system_call_fastpath+0x22/0x27
> Oct  1 11:52:28 worker1004 kernel: INFO: task fio:29007 blocked for more
> than 120 seconds.
> Oct  1 11:52:28 worker1004 kernel: "echo 0 >
> /proc/sys/kernel/hung_task_timeout_secs" disables this message.
> Oct  1 11:52:28 worker1004 kernel: fio             D
> ffff996d86e92f70     0 29007  28970 0x00000000
> Oct  1 11:52:28 worker1004 kernel: Call Trace:
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c316ec0>] ?
> bit_wait+0x50/0x50
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c318f39>] schedule+0x29/0x70
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c3168a9>]
> schedule_timeout+0x239/0x2c0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bcfa982>] ?
> ktime_get_ts64+0x52/0xf0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c316ec0>] ?
> bit_wait+0x50/0x50
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c31844d>]
> io_schedule_timeout+0xad/0x130
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c3184e8>]
> io_schedule+0x18/0x20
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c316ed1>]
> bit_wait_io+0x11/0x50
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c316a81>]
> __wait_on_bit_lock+0x61/0xc0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bd96684>]
> __lock_page+0x74/0x90
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bcbefd0>] ?
> wake_bit_function+0x40/0x40
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bd971a4>]
> __find_lock_page+0x54/0x70
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bd97ed5>]
> grab_cache_page_write_begin+0x55/0xc0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffffc12798e3>]
> ceph_write_begin+0x43/0xe0 [ceph]
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bd96ac4>]
> generic_file_buffered_write+0x124/0x2c0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffffc127526e>]
> ceph_aio_write+0xa3e/0xcb0 [ceph]
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bdc81be>] ?
> do_numa_page+0x1be/0x250
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bdc8566>] ?
> handle_pte_fault+0x316/0xd10
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8be731a3>] ?
> aio_read_events+0x1f3/0x2e0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8bed5437>] ?
> security_file_permission+0x27/0xa0
> Oct  1 11:52:28 worker1004 kernel: [<ffffffffc1274830>] ?
> ceph_direct_read_write+0xcd0/0xcd0 [ceph]
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8be74783>]
> do_io_submit+0x3c3/0x870
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8be74c40>]
> SyS_io_submit+0x10/0x20
> Oct  1 11:52:28 worker1004 kernel: [<ffffffff8c32579b>]
> system_call_fastpath+0x22/0x27
>
> Oct  1 15:04:08 worker1004 kernel: libceph: reset on mds0
> Oct  1 15:04:08 worker1004 kernel: ceph: mds0 closed our session
> Oct  1 15:04:08 worker1004 kernel: ceph: mds0 reconnect start
> Oct  1 15:04:08 worker1004 kernel: libceph: osd182 10.128.150.155:6976
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd548 10.128.150.171:6936
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd59 10.128.150.154:6918
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: ceph: mds0 reconnect denied
> Oct  1 15:04:08 worker1004 kernel: ceph:  dropping dirty+flushing Fw
> state for ffff997ff05193b0 1099516450605
> Oct  1 15:04:08 worker1004 kernel: ceph:  dropping dirty+flushing Fw
> state for ffff997ff0519930 1099516450607
> Oct  1 15:04:08 worker1004 kernel: ceph:  dropping dirty+flushing Fw
> state for ffff997ff051a430 1099516450606
> Oct  1 15:04:08 worker1004 kernel: ceph:  dropping dirty+flushing Fw
> state for ffff997ff0519eb0 1099516450608
> Oct  1 15:04:08 worker1004 kernel: libceph: osd356 10.128.150.31:6844
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd120 10.128.150.154:6885
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: mds0 10.130.134.4:6800
> socket closed (con state NEGOTIATING)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd151 10.128.150.155:6808
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd129 10.128.150.154:6845
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd363 10.128.150.31:6872
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd712 10.128.150.177:6908
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd592 10.128.150.174:6917
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd866 10.128.150.42:6976
> socket error on write
> Oct  1 15:04:08 worker1004 kernel: libceph: osd395 10.128.150.158:6868
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd159 10.128.150.155:6952
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd901 10.128.150.43:6936
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd658 10.128.150.176:6946
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd422 10.128.150.159:6859
> socket closed (con state OPEN)
> Oct  1 15:04:08 worker1004 kernel: libceph: osd718 10.128.150.177:6928
> socket error on read
> and similar.
>
> These nodes have been rock-solid with ceph-fuse 12.2.7.  They
> definitely have no known networking issues, and the Ceph cluster was
> healthy during this time.  There are no entries in the MDS logs around
> the problematic times either.  And after all this, the mount is unusable:
>
> [root@worker1004 ~]# ls -l /mnt/cephtest
> ls: cannot access /mnt/cephtest: Permission denied
>
> Andras
>
>
> On 10/1/18 3:02 PM, Andras Pataki wrote:
> > These hangs happen during random-I/O fio benchmark loads: something
> > like 4 or 8 fio processes doing random reads/writes to distinct large
> > files (to ensure no caching is possible).  This is all on CentOS
> > 7.4 nodes.  The same (and even tougher) tests run without any problems
> > with ceph-fuse.  We do have jobs that do heavy parallel I/O (MPI-IO,
> > HDF5 via MPI-IO, etc.), so running 8 parallel random-I/O-generating
> > processes on nodes with 28 cores and plenty of RAM (256GB - 512GB)
> > should not be excessive.
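(For reference, a fio job file along these lines reproduces the described load; the directory, file size, and job count here are illustrative guesses, not the exact configuration used.)

```ini
; Illustrative fio job: several processes doing buffered random
; reads/writes, each to its own large file (paths and sizes are guesses).
[global]
directory=/mnt/cephtest/fio
rw=randrw
bs=4k
size=64g
ioengine=libaio
iodepth=16
direct=0
time_based=1
runtime=3600

[randrw-workers]
numjobs=8
```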
> >
> > I am going to test the latest CentOS kernel next (the one you are
> > referencing).  The RedHat/CentOS kernels are not "old kernel clients":
> > they contain backports of hundreds of patches to all kinds of Linux
> > subsystems.  What is unclear is exactly which ceph client changes
> > RedHat is backporting into their kernels.  Any pointers there
> > would be helpful.
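One way to get a rough answer (a sketch; the package name must match the installed kernel) is to grep the kernel RPM changelog for ceph-related entries. On a CentOS node the input would come from `rpm -q --changelog kernel-3.10.0-862.14.4.el7.x86_64`; the filtering step is demonstrated here against a small invented changelog sample:

```shell
# Sketch: filter a kernel package changelog for ceph-related backports.
# The changelog lines below are invented for illustration; on a real node
# they would come from `rpm -q --changelog <kernel-package>`.
printf '%s\n' \
  '- [fs] ceph: example backported fix' \
  '- [net] tcp: unrelated change' \
  '- [fs] libceph: another example backport' \
| grep -i ceph
```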
> >
> > Andras
> >
> >
> > On 10/1/18 2:26 PM, Marc Roos wrote:
> >>   How do you test this? I have had no issues under "normal load" with an
> >> old kernel client and a stable OS.
> >>
> >> CentOS Linux release 7.5.1804 (Core)
> >> Linux c04 3.10.0-862.11.6.el7.x86_64 #1 SMP Tue Aug 14 21:49:04 UTC 2018
> >> x86_64 x86_64 x86_64 GNU/Linux
> >>
> >>
> >>
> >>
> >>
> >> -----Original Message-----
> >> From: Andras Pataki [mailto:[email protected]]
> >> Sent: maandag 1 oktober 2018 20:10
> >> To: ceph-users
> >> Subject: [ceph-users] cephfs kernel client stability
> >>
> >> We have so far been using ceph-fuse for mounting cephfs, but the small
> >> file performance of ceph-fuse is often problematic.  We've been testing
> >> the kernel client, and have seen some pretty bad crashes/hangs.
> >>
> >> What is the policy on fixes to the kernel client?  Is only the latest
> >> stable kernel updated (4.18.x nowadays), or are fixes also backported to
> >> LTS kernels (like 4.14.x or 4.9.x, for example)?  I've seen various
> >> threads saying that certain newer features require pretty new kernels -
> >> but I'm wondering whether newer kernels are also required for better
> >> stability - or, in general, where kernel client stability stands nowadays.
> >>
> >> Here is an example of a kernel hang with 4.14.67.  Under heavy load the
> >> machine isn't even pingable.
> >>
> >> Sep 29 21:10:16 worker1004 kernel: INFO: rcu_sched self-detected stall on CPU
> >> Sep 29 21:10:16 worker1004 kernel: #0111-...: (1 GPs behind) idle=bee/140000000000001/0 softirq=21319/21319 fqs=7499
> >> Sep 29 21:10:16 worker1004 kernel: #011 (t=15000 jiffies g=13989 c=13988 q=8334)
> >> Sep 29 21:10:16 worker1004 kernel: NMI backtrace for cpu 1
> >> Sep 29 21:10:16 worker1004 kernel: CPU: 1 PID: 19436 Comm: kworker/1:42 Tainted: P        W  O    4.14.67 #1
> >> Sep 29 21:10:16 worker1004 kernel: Hardware name: Dell Inc. PowerEdge C6320/082F9M, BIOS 2.6.0 10/27/2017
> >> Sep 29 21:10:16 worker1004 kernel: Workqueue: ceph-msgr ceph_con_workfn [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: Call Trace:
> >> Sep 29 21:10:16 worker1004 kernel: <IRQ>
> >> Sep 29 21:10:16 worker1004 kernel: dump_stack+0x46/0x5f
> >> Sep 29 21:10:16 worker1004 kernel: nmi_cpu_backtrace+0xba/0xc0
> >> Sep 29 21:10:16 worker1004 kernel: ? irq_force_complete_move+0xd0/0xd0
> >> Sep 29 21:10:16 worker1004 kernel: nmi_trigger_cpumask_backtrace+0x8a/0xc0
> >> Sep 29 21:10:16 worker1004 kernel: rcu_dump_cpu_stacks+0x81/0xb1
> >> Sep 29 21:10:16 worker1004 kernel: rcu_check_callbacks+0x642/0x790
> >> Sep 29 21:10:16 worker1004 kernel: ? update_wall_time+0x26d/0x6e0
> >> Sep 29 21:10:16 worker1004 kernel: update_process_times+0x23/0x50
> >> Sep 29 21:10:16 worker1004 kernel: tick_sched_timer+0x2f/0x60
> >> Sep 29 21:10:16 worker1004 kernel: __hrtimer_run_queues+0xa3/0xf0
> >> Sep 29 21:10:16 worker1004 kernel: hrtimer_interrupt+0x94/0x170
> >> Sep 29 21:10:16 worker1004 kernel: smp_apic_timer_interrupt+0x4c/0x90
> >> Sep 29 21:10:16 worker1004 kernel: apic_timer_interrupt+0x84/0x90
> >> Sep 29 21:10:16 worker1004 kernel: </IRQ>
> >> Sep 29 21:10:16 worker1004 kernel: RIP: 0010:crush_hash32_3+0x1e5/0x270 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: RSP: 0018:ffffc9000fdff5d8 EFLAGS: 00000a97 ORIG_RAX: ffffffffffffff10
> >> Sep 29 21:10:16 worker1004 kernel: RAX: 0000000006962033 RBX: ffff883f6e7173c0 RCX: 00000000dcdcc373
> >> Sep 29 21:10:16 worker1004 kernel: RDX: 00000000bd5425ca RSI: 000000008a8b0b56 RDI: 00000000b1983b87
> >> Sep 29 21:10:16 worker1004 kernel: RBP: 0000000000000023 R08: 00000000bd5425ca R09: 00000000137904e9
> >> Sep 29 21:10:16 worker1004 kernel: R10: 0000000000000000 R11: 0000000000000002 R12: 00000000b0f29f21
> >> Sep 29 21:10:16 worker1004 kernel: R13: 000000000000000c R14: 00000000f0ae0000 R15: 0000000000000023
> >> Sep 29 21:10:16 worker1004 kernel: crush_bucket_choose+0x2ad/0x340 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: crush_choose_firstn+0x1b0/0x4c0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: crush_choose_firstn+0x48d/0x4c0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: crush_do_rule+0x28c/0x5a0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ceph_pg_to_up_acting_osds+0x459/0x850 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: calc_target+0x213/0x520 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ? ixgbe_xmit_frame_ring+0x362/0xe80 [ixgbe]
> >> Sep 29 21:10:16 worker1004 kernel: ? put_prev_entity+0x27/0x620
> >> Sep 29 21:10:16 worker1004 kernel: ? pick_next_task_fair+0x1c7/0x520
> >> Sep 29 21:10:16 worker1004 kernel: scan_requests.constprop.55+0x16f/0x280 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: handle_one_map+0x175/0x200 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ceph_osdc_handle_map+0x390/0x850 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ? ceph_x_encrypt+0x46/0x70 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: dispatch+0x2ef/0xba0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ? read_partial_message+0x215/0x880 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ? inet_recvmsg+0x45/0xb0
> >> Sep 29 21:10:16 worker1004 kernel: try_read+0x6f8/0x11b0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: ? sched_clock_cpu+0xc/0xa0
> >> Sep 29 21:10:16 worker1004 kernel: ? put_prev_entity+0x27/0x620
> >> Sep 29 21:10:16 worker1004 kernel: ? pick_next_task_fair+0x415/0x520
> >> Sep 29 21:10:16 worker1004 kernel: ceph_con_workfn+0x9d/0x5a0 [libceph]
> >> Sep 29 21:10:16 worker1004 kernel: process_one_work+0x127/0x290
> >> Sep 29 21:10:16 worker1004 kernel: worker_thread+0x3f/0x3b0
> >> Sep 29 21:10:16 worker1004 kernel: kthread+0xf2/0x130
> >> Sep 29 21:10:16 worker1004 kernel: ? process_one_work+0x290/0x290
> >> Sep 29 21:10:16 worker1004 kernel: ? __kthread_parkme+0x90/0x90
> >> Sep 29 21:10:16 worker1004 kernel: ret_from_fork+0x1f/0x30
> >>
> >> Andras
> >>
> >> _______________________________________________
> >> ceph-users mailing list
> >> [email protected]
> >> http://lists.ceph.com/listinfo.cgi/ceph-users-ceph.com
> >>
> >>
>
