I actually just wrote this exact patch moments ago. But you deserve the
credit; I'll use yours :).
-corey
On 06/09/2016 11:31 PM, Junichi Nomura wrote:
> Commit 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for
> SMI interfaces") changed handle_new_recv_msgs() to call handle_one_recv_msg()
> for a smi_msg while the smi_msg is still connected to waiting_rcv_msgs list.
> That could lead to following list corruption problems:
>
> 1) A low-level function treats the smi_msg as not connected to any list
>
> handle_one_recv_msg() could end up calling smi_send(), which
> assumes the msg is not connected to any list.
>
> For example, the following sequence could corrupt the list by
> doing list_add_tail() on an entry still connected to another list.
>
> handle_new_recv_msgs()
> msg = list_entry(waiting_rcv_msgs)
> handle_one_recv_msg(msg)
> handle_ipmb_get_msg_cmd(msg)
> smi_send(msg)
> spin_lock(xmit_msgs_lock)
> list_add_tail(msg)
> spin_unlock(xmit_msgs_lock)
>
> 2) race between multiple handle_new_recv_msgs() instances
>
> handle_new_recv_msgs() releases waiting_rcv_msgs_lock before calling
> handle_one_recv_msg(), then retakes the lock and calls list_del() on the entry.
>
> If another caller enters handle_new_recv_msgs() during the window shown
> below, list_del() will be done twice for the same smi_msg.
>
> handle_new_recv_msgs()
> spin_lock(waiting_rcv_msgs_lock)
> msg = list_entry(waiting_rcv_msgs)
> spin_unlock(waiting_rcv_msgs_lock)
> |
> | handle_one_recv_msg(msg)
> |
> spin_lock(waiting_rcv_msgs_lock)
> list_del(msg)
> spin_unlock(waiting_rcv_msgs_lock)
>
> Fixes: 7ea0ed2b5be8 ("ipmi: Make the message handler easier to use for SMI interfaces")
> Signed-off-by: Jun'ichi Nomura <[email protected]>
>
> diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
> index 94fb407..94e4a88 100644
> --- a/drivers/char/ipmi/ipmi_msghandler.c
> +++ b/drivers/char/ipmi/ipmi_msghandler.c
> @@ -3820,6 +3820,7 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
> while (!list_empty(&intf->waiting_rcv_msgs)) {
> smi_msg = list_entry(intf->waiting_rcv_msgs.next,
> struct ipmi_smi_msg, link);
> + list_del(&smi_msg->link);
> if (!run_to_completion)
> spin_unlock_irqrestore(&intf->waiting_rcv_msgs_lock,
> flags);
> @@ -3831,9 +3832,9 @@ static void handle_new_recv_msgs(ipmi_smi_t intf)
> * To preserve message order, quit if we
> * can't handle a message.
> */
> + list_add(&smi_msg->link, &intf->waiting_rcv_msgs);
> break;
> } else {
> - list_del(&smi_msg->link);
> if (rv == 0)
> /* Message handled */
> ipmi_free_smi_msg(smi_msg);
_______________________________________________
Openipmi-developer mailing list
[email protected]
https://lists.sourceforge.net/lists/listinfo/openipmi-developer