I'm working on a setup where I connect to several iSCSI targets, then 
set up RAID-5 across those iSCSI target volumes (the iSCSI targets are 
themselves RAID devices; we are adding a RAID-5 layer here to cover the 
case of a controller/target failure).  We use this very large 
RAID-5-of-iSCSI-targets as an LVM volume group so that we can carve out 
logical volumes that safely span multiple iSCSI targets.  We then 
present these LVM volumes as iSCSI targets using IET.
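
For concreteness, the middle box gets built up roughly like this (just a 
sketch; the device names, size, volume group name, and target name below 
are placeholders, not our real ones):

#!/usr/bin/env python
# Sketch: assemble the RAID-5 + LVM layer over the imported iSCSI block
# devices (placeholders: /dev/sdb, /dev/sdc, /dev/sdd).
import subprocess

def run(*cmd):
    subprocess.check_call(cmd)

# RAID-5 across the open-iscsi block devices
run("mdadm", "--create", "/dev/md0", "--level=5",
    "--raid-devices=3", "/dev/sdb", "/dev/sdc", "/dev/sdd")

# LVM on top of the md device
run("pvcreate", "/dev/md0")
run("vgcreate", "vg_iscsi", "/dev/md0")
run("lvcreate", "-L", "500G", "-n", "lv_export", "vg_iscsi")

# The resulting LV is then exported with IET, e.g. in ietd.conf:
#   Target iqn.2008-01.example.com:lv_export
#       Lun 0 Path=/dev/vg_iscsi/lv_export,Type=blockio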

In general, this seems to work OK, but when pushing large writes to the 
constructed iSCSI target we see very bursty writes.  It will write 
quickly for about 30 seconds, then fall off to almost nothing for 20 
seconds, then resume quick operation, and repeat.  This is visible both 
in the fact that the filesystem is unresponsive during the 20-second 
slow period, and in the network utilization, which drops drastically 
during that time.
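
(A quick way to watch this, assuming the backing device shows up as 
/dev/sdb - that name is just a placeholder - is to sample the 
sectors-written counter in /sys/block/<dev>/stat once a second, along 
these lines:)

import time

DEV = "sdb"  # placeholder for the device backing the exported LV

def sectors_written(dev):
    # Field 7 (index 6) of /sys/block/<dev>/stat is total sectors written.
    with open("/sys/block/%s/stat" % dev) as f:
        return int(f.read().split()[6])

prev = sectors_written(DEV)
while True:
    time.sleep(1)
    cur = sectors_written(DEV)
    # Sectors are 512 bytes; print MB/s written over the last second.
    print("%.1f MB/s" % ((cur - prev) * 512 / 1e6))
    prev = cur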

I'm wondering a few things:

1) Is there any known problem with passing through two layers of iSCSI?

2) Is this type of bursty traffic normal?

3) This may be related to the queue_depth filling up - is there an easy 
way to see the depth of the queue in real time? (Not the setting, but 
the actual state of the queue; see the sketch below.)
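
On (3), what I have in mind is something like the following, which polls 
the in-flight request count from /sys/block/<dev>/stat and compares it 
against the configured queue_depth ("sdb" is a placeholder for one of 
the imported open-iscsi devices; I'm assuming the in-flight field is the 
right thing to watch):

import time

DEV = "sdb"  # placeholder: one of the imported open-iscsi devices

# Configured queue depth for the SCSI device (the setting, for comparison).
with open("/sys/block/%s/device/queue_depth" % DEV) as f:
    depth_setting = f.read().strip()

while True:
    # Field 9 (index 8) of /sys/block/<dev>/stat is the number of
    # requests currently in flight for the device.
    with open("/sys/block/%s/stat" % DEV) as f:
        in_flight = f.read().split()[8]
    print("in flight: %s / queue_depth setting: %s" % (in_flight, depth_setting))
    time.sleep(1)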

I know the setup is a bit convoluted, so here is another view of it.  The 
physical data blocks are on the left, with more layers added as you move 
right until we reach our "production" servers. (This is all 
pre-production at the moment.)

SATA_disks -> Raid6_controller -> IET(blockio) ->
  (next box) -> open-iscsi -> md_raid5 -> lvm -> Logical_volume -> IET(blockio) ->
  (next box) -> open-iscsi -> filesystem_on_production_server

It is when writing to that filesystem that the problem becomes pronounced.

Any insight that anyone has on this would be very welcome.




  Ty! Boyack
  NREL Unix Network Manager
  (970) 491-1186
