On 10/08/2016 02:57 PM, Laurence Oberman wrote:
Hello

This has been a tough problem to chase down but was finally reproduced.
This issue is apparent on both RHEL and upstream kernels, so reporting it here seems justified.

It's out there, and some may not even be aware it's happening, other than seeing very
slow performance using ixgbe and software FCoE on large configurations.

The upstream kernel used to reproduce this is 4.8.0.

I/O performance was noted to be severely impacted on a large NUMA test system
(64 CPUs, 4 NUMA nodes) running the software FCoE stack with Intel ixgbe interfaces.
After capturing blktraces we saw that for every I/O there was at least one
blk_requeue_request, and sometimes hundreds or more.
This resulted in IOPS rates that were marginal at best, with queuing and high wait
times.
After narrowing this down with systemtap and trace-cmd we added further debug,
and it became apparent this was due to SCSI_MLQUEUE_HOST_BUSY being returned.
So I/O passes, but very slowly, as it is constantly being requeued.
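
For context, my understanding of why every SCSI_MLQUEUE_HOST_BUSY shows up as a
blk_requeue_request in the blktrace: paraphrased from the non-mq dispatch path in
drivers/scsi/scsi_lib.c (not verbatim), the midlayer does roughly

/*
 * Paraphrased, non-mq request_fn path; queuecommand() here ends up
 * being fc_queuecommand().
 */
rtn = scsi_dispatch_cmd(cmd);        /* -> host->hostt->queuecommand() */
if (rtn) {                           /* e.g. SCSI_MLQUEUE_HOST_BUSY */
        scsi_queue_insert(cmd, rtn); /* -> blk_requeue_request(), what blktrace sees */
        ..
}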

The identical configuration in our lab with a single NUMA node and 4 CPUs does
not see this issue at all.
The same large system that reproduces this was booted with numa=off and still 
sees the issue.

Have you tested with my FCoE fixes?
I've done quite a few fixes for libfc/fcoe, and it would be nice to see how the patches behave with this setup.

The flow is as follows:

From within fc_queuecommand():
          fc_fcp_pkt_send() calls fc_fcp_cmd_send(), which calls tt.exch_seq_send(),
          which calls fc_exch_seq_send().

This fails and returns NULL from fc_exch_alloc(), as the list traversal never
produces a match.
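
Condensed, the way that failure propagates back up to the midlayer looks like this
(paraphrased from fc_fcp.c and trimmed to just the return-value handling, matching
the RHDEBUG output further down; not verbatim):

/* fc_fcp_cmd_send(), trimmed: the exchange allocation fails */
seq = lport->tt.exch_seq_send(lport, fp, resp, fc_fcp_pkt_destroy, fsp, 0);
if (!seq) {                     /* fc_exch_seq_send() returned NULL */
        rc = -1;
        goto unlock;            /* fc_fcp_pkt_send() then returns -1 as well */
}

/* fc_queuecommand(), trimmed: the -1 becomes SCSI_MLQUEUE_HOST_BUSY */
rval = fc_fcp_pkt_send(lport, fsp);
if (rval != 0) {
        fsp->state = FC_SRB_FREE;
        fc_fcp_pkt_release(fsp);
        rc = SCSI_MLQUEUE_HOST_BUSY;    /* -> requeued by the midlayer */
}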

static struct fc_seq *fc_exch_seq_send(struct fc_lport *lport,
                                       struct fc_frame *fp,
                                       void (*resp)(struct fc_seq *,
                                                    struct fc_frame *fp,
                                                    void *arg),
                                       void (*destructor)(struct fc_seq *,
                                                          void *),
                                       void *arg, u32 timer_msec)
{
        struct fc_exch *ep;
        struct fc_seq *sp = NULL;
        struct fc_frame_header *fh;
        struct fc_fcp_pkt *fsp = NULL;
        int rc = 1;

        ep = fc_exch_alloc(lport, fp);     ***** Called Here and fails
        if (!ep) {
                fc_frame_free(fp);
                printk("RHDEBUG: In fc_exch_seq_send returned NULL because !ep with 
ep = %p\n",ep);
                return NULL;
        }
..
..
}


/**
 * fc_exch_alloc() - Allocate an exchange from an EM on a
 *                   local port's list of EMs.
 * @lport: The local port that will own the exchange
 * @fp:    The FC frame that the exchange will be for
 *
 * This function walks the list of exchange manager(EM)
 * anchors to select an EM for a new exchange allocation. The
 * EM is selected when a NULL match function pointer is encountered
 * or when a call to a match function returns true.
 */
static inline struct fc_exch *fc_exch_alloc(struct fc_lport *lport,
                                            struct fc_frame *fp)
{
        struct fc_exch_mgr_anchor *ema;

        list_for_each_entry(ema, &lport->ema_list, ema_list)
                if (!ema->match || ema->match(fp))
                        return fc_exch_em_alloc(lport, ema->mp);
        return NULL;                             ***** Never matches, so returns NULL
}


RHDEBUG: In fc_exch_seq_send returned NULL because !ep with ep = (null)
RHDEBUG: rc -1 with !seq = (null) after calling tt.exch_seq_send within fc_fcp_cmd_send
RHDEBUG: rc non zero in :unlock within fc_fcp_cmd_send = -1
RHDEBUG: In fc_fcp_pkt_send, we returned from rc = lport->tt.fcp_cmd_send with rc = -1

RHDEBUG: We hit SCSI_MLQUEUE_HOST_BUSY in fc_queuecommand with rval in fc_fcp_pkt_send=-1

I am trying to get my head around why a large multi-node system sees this issue 
even with NUMA disabled.
Has anybody seen this, or is anyone aware of it happening with configurations
using fc_queuecommand?

I am continuing to add debug to narrow this down.

You might actually be hitting a limitation in the exchange manager code.
The libfc exchange manager tries to be really clever and will assign a per-CPU exchange manager (probably to increase locality). However, we only have a limited number of exchanges, so on large systems we might actually run into an exchange starvation problem, where we have in theory enough free exchanges, but none for the submitting CPU.
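
Very roughly, the allocation side looks like this (heavily condensed from my reading of fc_exch_em_alloc() in drivers/scsi/libfc/fc_exch.c; treat it as a sketch and check the details against your tree):

/* Sketch only, not verbatim: each EM splits its XID range into per-CPU
 * pools, and an exchange is only ever taken from the pool of the
 * submitting CPU.
 */
cpu = get_cpu();
pool = per_cpu_ptr(mp->pool, cpu);      /* pool for this CPU only */
spin_lock_bh(&pool->lock);
put_cpu();

index = pool->next_index;
while (fc_exch_ptr_get(pool, index)) {  /* every slot in *this* pool in use? */
        index = index == mp->pool_max_index ? 0 : index + 1;
        if (index == pool->next_index)
                goto err_pool;          /* fc_exch_alloc() ends up returning NULL,
                                         * even if other pools still have room */
}

So on a large box each pool only gets a small slice of the XID space, and a few busy submitting CPUs can exhaust their slices while plenty of exchanges sit idle elsewhere; that would also explain why numa=off makes no difference, since the split is per CPU, not per node.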

(Personally, I think the exchange manager code is in urgent need of reworking;
it should be replaced by the sbitmap code from Omar.)

Do check how many free exchanges are actually present for the stalling CPU; it might be that you run into a starvation issue.
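
If it helps, something along these lines could dump the per-CPU usage (untested sketch only; it would have to live in fc_exch.c where struct fc_exch_mgr and struct fc_exch_pool are visible, and the field names are from memory, so check them against your tree):

/* Untested sketch: print per-CPU exchange pool usage for one EM. */
static void fc_exch_dump_pools(struct fc_exch_mgr *mp)
{
        unsigned int cpu;

        for_each_possible_cpu(cpu) {
                struct fc_exch_pool *pool = per_cpu_ptr(mp->pool, cpu);

                printk(KERN_INFO "fc_exch: cpu %u: %u exchanges in use (pool max index %u)\n",
                       cpu, pool->total_exches, mp->pool_max_index);
        }
}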

Cheers,

Hannes
--
Dr. Hannes Reinecke                   zSeries & Storage
h...@suse.de                          +49 911 74053 688
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg
GF: J. Hawn, J. Guild, F. Imendörffer, HRB 16746 (AG Nürnberg)
_______________________________________________
fcoe-devel mailing list
fcoe-devel@open-fcoe.org
http://lists.open-fcoe.org/mailman/listinfo/fcoe-devel
