This is a note to let you know that I've just added the patch titled

    libceph: fix mutex coverage for ceph_con_close

to the 3.4-stable tree which can be found at:
    
http://www.kernel.org/git/?p=linux/kernel/git/stable/stable-queue.git;a=summary

The filename of the patch is:
     0086-libceph-fix-mutex-coverage-for-ceph_con_close.patch
and it can be found in the queue-3.4 subdirectory.

If you, or anyone else, feels it should not be added to the stable tree,
please let <[email protected]> know about it.


From a61ddafc8e0b33876e676e2af4e22ebae25d6d63 Mon Sep 17 00:00:00 2001
From: Sage Weil <[email protected]>
Date: Mon, 30 Jul 2012 16:24:37 -0700
Subject: libceph: fix mutex coverage for ceph_con_close

From: Sage Weil <[email protected]>

(cherry picked from commit 8c50c817566dfa4581f82373aac39f3e608a7dc8)

Hold the mutex while twiddling all of the state bits to avoid possible
races.  While we're here, make note of why we cannot close the socket
directly.

Signed-off-by: Sage Weil <[email protected]>
Reviewed-by: Alex Elder <[email protected]>
Reviewed-by: Yehuda Sadeh <[email protected]>
Signed-off-by: Greg Kroah-Hartman <[email protected]>
---
 net/ceph/messenger.c |    8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

--- a/net/ceph/messenger.c
+++ b/net/ceph/messenger.c
@@ -503,6 +503,7 @@ static void reset_connection(struct ceph
  */
 void ceph_con_close(struct ceph_connection *con)
 {
+       mutex_lock(&con->mutex);
        dout("con_close %p peer %s\n", con,
             ceph_pr_addr(&con->peer_addr.in_addr));
        clear_bit(NEGOTIATING, &con->state);
@@ -515,11 +516,16 @@ void ceph_con_close(struct ceph_connecti
        clear_bit(KEEPALIVE_PENDING, &con->flags);
        clear_bit(WRITE_PENDING, &con->flags);
 
-       mutex_lock(&con->mutex);
        reset_connection(con);
        con->peer_global_seq = 0;
        cancel_delayed_work(&con->work);
        mutex_unlock(&con->mutex);
+
+       /*
+        * We cannot close the socket directly from here because the
+        * work threads use it without holding the mutex.  Instead, let
+        * con_work() do it.
+        */
        queue_con(con);
 }
 EXPORT_SYMBOL(ceph_con_close);
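
For readers following the locking change, here is a minimal userspace C
sketch (pthreads) of the pattern the patch enforces: every piece of
connection state is changed only under the connection mutex, and the
socket itself is touched only by the worker.  The names here (fake_con,
con_close, con_work) are illustrative stand-ins, not the kernel's types
or functions.

    /*
     * Userspace analogue of the fix, for illustration only (not kernel
     * code).  All state transitions happen under con->mutex, and the
     * actual socket close is deferred to the worker, which is the only
     * context allowed to use the socket.
     *
     * Build with: gcc -pthread sketch.c
     */
    #include <pthread.h>
    #include <stdbool.h>
    #include <stdio.h>

    struct fake_con {
            pthread_mutex_t mutex;
            bool negotiating;
            bool write_pending;
            bool closed;
    };

    /* Worker: the only context that may touch the socket itself. */
    static void con_work(struct fake_con *con)
    {
            pthread_mutex_lock(&con->mutex);
            if (con->closed)
                    printf("worker closes the socket here\n");
            pthread_mutex_unlock(&con->mutex);
    }

    static void con_close(struct fake_con *con)
    {
            /* Lock taken before any state bits are twiddled. */
            pthread_mutex_lock(&con->mutex);
            con->negotiating = false;
            con->write_pending = false;
            con->closed = true;
            pthread_mutex_unlock(&con->mutex);

            /*
             * Do not close the socket here: a worker may be using it
             * without holding the mutex.  Hand off instead; this call
             * stands in for queue_con() in the real code.
             */
            con_work(con);
    }

    int main(void)
    {
            struct fake_con con = { .mutex = PTHREAD_MUTEX_INITIALIZER };
            con_close(&con);
            return 0;
    }

This mirrors the commit's point: ceph_con_close() marks the connection
closed under the lock and defers the socket close, because the work
threads use the socket without holding the mutex.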


Patches currently in stable-queue which might be from [email protected] are

queue-3.4/0073-libceph-clear-CONNECTING-in-ceph_con_close.patch
queue-3.4/0020-ceph-ensure-auth-ops-are-defined-before-use.patch
queue-3.4/0025-ceph-add-auth-buf-in-prepare_write_connect.patch
queue-3.4/0021-ceph-have-get_authorizer-methods-return-pointers.patch
queue-3.4/0026-libceph-avoid-unregistering-osd-request-when-not-reg.patch
queue-3.4/0077-libceph-distinguish-two-phases-of-connect-sequence.patch
queue-3.4/0045-libceph-provide-osd-number-when-creating-osd.patch
queue-3.4/0059-libceph-transition-socket-state-prior-to-actual-conn.patch
queue-3.4/0084-libceph-prevent-the-race-of-incoming-work-during-tea.patch
queue-3.4/0005-crush-fix-memory-leak-when-destroying-tree-buckets.patch
queue-3.4/0002-crush-adjust-local-retry-threshold.patch
queue-3.4/0091-libceph-fix-fault-locking-close-socket-on-lossy-faul.patch
queue-3.4/0088-libceph-re-initialize-bio_iter-on-start-of-message-r.patch
queue-3.4/0023-ceph-return-pointer-from-prepare_connect_authorizer.patch
queue-3.4/0055-libceph-make-ceph_con_revoke_message-a-msg-op.patch
queue-3.4/0090-libceph-reset-connection-retry-on-successfully-negot.patch
queue-3.4/0054-libceph-make-ceph_con_revoke-a-msg-operation.patch
queue-3.4/0098-libceph-clean-up-con-flags.patch
queue-3.4/0093-libceph-move-ceph_con_send-closed-check-under-the-co.patch
queue-3.4/0066-libceph-move-init_bio_-functions-up.patch
queue-3.4/0018-ceph-define-ceph_auth_handshake-type.patch
queue-3.4/0063-libceph-encapsulate-out-message-data-setup.patch
queue-3.4/0076-libceph-separate-banner-and-connect-writes.patch
queue-3.4/0040-libceph-rename-socket-callbacks.patch
queue-3.4/0011-ceph-messenger-reset-connection-kvec-caller.patch
queue-3.4/0032-libceph-fix-messenger-retry.patch
queue-3.4/0070-libceph-don-t-change-socket-state-on-sock-event.patch
queue-3.4/0061-libceph-use-con-get-put-methods.patch
queue-3.4/0074-libceph-clear-NEGOTIATING-when-done.patch
queue-3.4/0019-ceph-messenger-reduce-args-to-create_authorizer.patch
queue-3.4/0041-libceph-rename-kvec_reset-and-kvec_add-functions.patch
queue-3.4/0047-libceph-embed-ceph-connection-structure-in-mon_clien.patch
queue-3.4/0029-libceph-use-con-get-put-ops-from-osd_client.patch
queue-3.4/0051-libceph-tweak-ceph_alloc_msg.patch
queue-3.4/0064-libceph-encapsulate-advancing-msg-page.patch
queue-3.4/0075-libceph-define-and-use-an-explicit-CONNECTED-state.patch
queue-3.4/0082-libceph-allow-sock-transition-from-CONNECTING-to-CLO.patch
queue-3.4/0015-ceph-messenger-check-prepare_write_connect-result.patch
queue-3.4/0003-crush-be-more-tolerant-of-nonsensical-crush-maps.patch
queue-3.4/0028-libceph-osd_client-don-t-drop-reply-reference-too-ea.patch
queue-3.4/0014-ceph-don-t-set-WRITE_PENDING-too-early.patch
queue-3.4/0049-libceph-init-monitor-connection-when-opening.patch
queue-3.4/0016-ceph-messenger-rework-prepare_connect_authorizer.patch
queue-3.4/0097-libceph-replace-connection-state-bits-with-states.patch
queue-3.4/0068-libceph-don-t-use-bio_iter-as-a-flag.patch
queue-3.4/0062-libceph-drop-ceph_con_get-put-helpers-and-nref-membe.patch
queue-3.4/0089-libceph-protect-ceph_con_open-with-mutex.patch
queue-3.4/0048-libceph-drop-connection-refcounting-for-mon_client.patch
queue-3.4/0031-libceph-flush-msgr-queue-during-mon_client-shutdown.patch
queue-3.4/0094-libceph-drop-gratuitous-socket-close-calls-in-con_wo.patch
queue-3.4/0013-ceph-drop-msgr-argument-from-prepare_write_connect.patch
queue-3.4/0080-libceph-set-peer-name-on-con_open-not-init.patch
queue-3.4/0043-libceph-start-separating-connection-flags-from-state.patch
queue-3.4/0046-libceph-set-CLOSED-state-bit-in-con_init.patch
queue-3.4/0085-libceph-report-socket-read-write-error-message.patch
queue-3.4/0083-libceph-initialize-msgpool-message-types.patch
queue-3.4/0092-libceph-move-msgr-clear_standby-under-con-mutex-prot.patch
queue-3.4/0095-libceph-close-socket-directly-from-ceph_con_close.patch
queue-3.4/0009-ceph-messenger-change-read_partial-to-take-end-arg.patch
queue-3.4/0096-libceph-drop-unnecessary-CLOSED-check-in-socket-stat.patch
queue-3.4/0017-ceph-messenger-check-return-from-get_authorizer.patch
queue-3.4/0086-libceph-fix-mutex-coverage-for-ceph_con_close.patch
queue-3.4/0001-crush-clean-up-types-const-ness.patch
queue-3.4/0072-libceph-don-t-touch-con-state-in-con_close_socket.patch
queue-3.4/0037-ceph-check-PG_Private-flag-before-accessing-page-pri.patch
queue-3.4/0044-libceph-start-tracking-connection-socket-state.patch
queue-3.4/0099-libceph-clear-all-flags-on-con_close.patch
queue-3.4/0071-libceph-just-set-SOCK_CLOSED-when-state-changes.patch
queue-3.4/0022-ceph-use-info-returned-by-get_authorizer.patch
queue-3.4/0065-libceph-don-t-mark-footer-complete-before-it-is.patch
queue-3.4/0027-libceph-fix-pg_temp-updates.patch
queue-3.4/0079-libceph-add-some-fine-ASCII-art.patch
queue-3.4/0008-ceph-messenger-update-to-in-read_partial-caller.patch
queue-3.4/0007-ceph-messenger-use-read_partial-in-read_partial_mess.patch
queue-3.4/0078-libceph-small-changes-to-messenger.c.patch
queue-3.4/0010-libceph-don-t-reset-kvec-in-prepare_write_banner.patch
queue-3.4/0087-libceph-resubmit-linger-ops-when-pg-mapping-changes.patch
queue-3.4/0052-libceph-have-messages-point-to-their-connection.patch
queue-3.4/0067-libceph-move-init-of-bio_iter.patch
queue-3.4/0081-libceph-initialize-mon_client-con-only-once.patch
queue-3.4/0050-libceph-fully-initialize-connection-in-con_init.patch
queue-3.4/0053-libceph-have-messages-take-a-connection-reference.patch
queue-3.4/0012-ceph-messenger-send-banner-in-process_connect.patch
queue-3.4/0024-ceph-rename-prepare_connect_authorizer.patch
queue-3.4/0042-libceph-embed-ceph-messenger-structure-in-ceph_clien.patch
queue-3.4/0069-libceph-SOCK_CLOSED-is-a-flag-not-a-state.patch
queue-3.4/0004-crush-fix-tree-node-weight-lookup.patch
--
To unsubscribe from this list: send the line "unsubscribe stable" in
the body of a message to [email protected]
More majordomo info at  http://vger.kernel.org/majordomo-info.html
