Commit:     a0a74e45057cc3138c29173e7b0b3db8b30939ae
Parent:     a43e6bd1be17573b4f9489190d440677bcb300f6
Author:     Jesper Juhl <[EMAIL PROTECTED]>
AuthorDate: Thu Aug 9 20:47:15 2007 +0200
Committer:  James Bottomley <[EMAIL PROTECTED]>
CommitDate: Fri Oct 12 14:40:03 2007 -0400

    [SCSI] lpfc: fix potential overflow of hbqs array

    The Coverity checker noticed that we may overrun a statically allocated
    array in drivers/scsi/lpfc/lpfc_sli.c::lpfc_sli_hbqbuf_find().
    The case is this: in 'struct lpfc_hba' we have
        #define LPFC_MAX_HBQS  4
        struct lpfc_hba {
                struct hbq_s hbqs[LPFC_MAX_HBQS];
    But then in lpfc_sli_hbqbuf_find() we have this code
        hbqno = tag >> 16;
        if (hbqno > LPFC_MAX_HBQS)
                return NULL;
    If 'hbqno' ends up as exactly 4, then we won't return, and then this
        list_for_each_entry(d_buf, &phba->hbqs[hbqno].hbq_buffer_list, list) {
    will cause an overflow of the statically allocated array at index 4,
    since the valid indices are only 0-3.
    I propose this patch, which simply changes 'hbqno > LPFC_MAX_HBQS'
    to 'hbqno >= LPFC_MAX_HBQS' as the fix.
    Signed-off-by: Jesper Juhl <[EMAIL PROTECTED]>
    Acked-by: James Smart <[EMAIL PROTECTED]>
    Signed-off-by: James Bottomley <[EMAIL PROTECTED]>
 drivers/scsi/lpfc/lpfc_sli.c |    2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/drivers/scsi/lpfc/lpfc_sli.c b/drivers/scsi/lpfc/lpfc_sli.c
index ce5ff2b..e5337ad 100644
--- a/drivers/scsi/lpfc/lpfc_sli.c
+++ b/drivers/scsi/lpfc/lpfc_sli.c
@@ -675,7 +675,7 @@ lpfc_sli_hbqbuf_find(struct lpfc_hba *phba, uint32_t tag)
        uint32_t hbqno;
        hbqno = tag >> 16;
-       if (hbqno > LPFC_MAX_HBQS)
+       if (hbqno >= LPFC_MAX_HBQS)
                return NULL;
        list_for_each_entry(d_buf, &phba->hbqs[hbqno].hbq_buffer_list, list) {