Cameron Harr wrote:
Also a little disconcerting: my average request size on the target has gotten larger. I'm always writing 512B packets, and when I run on one initiator, the average request size is around 600-800B. When I add an initiator, it basically doubles, to around 1200-1600B. I'm specifying direct IO in the test, and SCST is configured as BLOCKIO (and thus direct IO), yet something appears to be cached at some point and coalesced when another initiator is involved. Does this seem odd or normal? It holds true whether the initiators are writing to different partitions on the same LUN or to the same LUN with no partitions.

I've been doing some testing to determine why my average request size is bloated beyond the 512B packets I'm sending. It appears to be caused by heavy utilization of the middleware: SRPT or SCST. As I add processes on an initiator, the average request size goes up, and it really jumps when I have more than 2 processes (running on 1 or 2 initiators) or when I'm writing to the same target LUN. My hunch is that the calculation of the average request size over a 1s interval is skewed because some requests have to wait on either the IB layer or the SCST layer.

Thinking that perhaps the srpt_thread was a cause, I turned off threading there, but that made the packet sizing much more wild: never dropping to 512B and growing to as much as 4KB. Using the deadline scheduler instead of the default cfq scheduler didn't seem to make a difference.

I guess you are using regular cached IO? The lowest packet size it can produce is PAGE_SIZE (4K); the target can't change that. You can get smaller packets only with O_DIRECT or the sg interface, but I'm not sure that would be effective performance-wise.

I'd recommend using 4K packets and the deadline IO scheduler.

Cameron


_______________________________________________
general mailing list
general@lists.openfabrics.org
http://lists.openfabrics.org/cgi-bin/mailman/listinfo/general
