Thanks, Miguel.  I had forgotten about the read penalty with raid5.
I've got plenty of horsepower for the additional parity computation, but
I was expecting the workload to be nearly all writes.  Knowing (now)
that, as you pointed out, each small write costs two reads and two
writes, I think I'm seeing the problem.
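
Just to check my understanding, my mental model of that small-write
cycle is roughly the sketch below (the read_block/write_block helpers
and disk names are placeholders for illustration, not anything out of
md itself):

    def raid5_small_write(read_block, write_block,
                          data_disk, parity_disk, new_data):
        old_data = read_block(data_disk)        # read 1: old data block
        old_parity = read_block(parity_disk)    # read 2: old parity block

        # new parity = old parity XOR old data XOR new data, byte by byte
        new_parity = bytes(
            p ^ od ^ nd
            for p, od, nd in zip(old_parity, old_data, new_data))

        write_block(data_disk, new_data)        # write 1: new data block
        write_block(parity_disk, new_parity)    # write 2: new parity block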

Does raid5 normally assume that those reads will come over fast,
low-latency links (i.e. SCSI/SAS/SATA), ideally straight out of the
drives' caches?  If so, then I guess I'm creating my own problem by
running raid5 over iscsi, where the links are slower than the disks and
certainly not low latency -- which effectively eliminates any benefit
from disk-side caching.
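
To put rough numbers on that (these latencies are guesses for the sake
of argument, not measurements from this setup):

    # Back-of-envelope only: the latencies are assumptions, not
    # measurements, and I'm assuming the two reads (and later the two
    # writes) are issued in parallel.
    iscsi_op_ms = 0.5   # guessed round trip for one iSCSI read or write
    local_op_ms = 0.1   # guessed local op answered from the drive cache

    # small-write latency ~= read phase + write phase, since the writes
    # can't be issued until both old blocks have come back
    print("local disks:", local_op_ms + local_op_ms, "ms")   # ~0.2 ms
    print("over iscsi :", iscsi_op_ms + iscsi_op_ms, "ms")   # ~1.0 ms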

Do you know if raid 3 and 4 suffer from the same issue?  Or, to ask a
more accurate question: is there a raid level that uses a minimum of
parity disks (ideally one) but does not have this read penalty on
writes?  I'd be willing to trade controller memory footprint or parity
computation time for that.
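
My (possibly wrong) understanding is that the reads only go away when a
full stripe is written at once, because the parity can then be computed
from the new data alone -- which would make this more a question of
write buffering than of raid level.  A rough sketch of that case, with
made-up chunk contents and an assumed four-data-disk stripe:

    from functools import reduce

    def full_stripe_parity(new_chunks):
        # XOR all the new chunks byte-by-byte; nothing old needs to be
        # read back when the whole stripe is in hand.
        return bytes(reduce(lambda a, b: a ^ b, column)
                     for column in zip(*new_chunks))

    chunks = [b"\x00" * 64, b"\xff" * 64, b"\x0f" * 64, b"\xf0" * 64]
    parity = full_stripe_parity(chunks)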

Thanks - great pointer!!

-Ty!

Miguel Gonzalez Castaños wrote:
> Ty! Boyack wrote:
>   
>> Folks,
>>
>> I'm working on a setup where I connect to several iSCSI targets, then 
>> set up raid-5 on those iSCSI target volumes (the iSCSI targets are 
>> themselves raid devices; we are adding the raid-5 layer here to cover 
>> the case of a controller/target failure).  We use this very large 
>> raid5-of-iscsi-targets as an LVM volume group so that we can carve out 
>> logical volumes that safely span across multiple iscsi targets.  We then 
>> present these LVM volumes as iSCSI targets using IET.
>>
>> In general, this seems to work OK, but when shoving large writes to the 
>> constructed iscsi target we see very bursty writes.  It will write 
>> quickly for about 30 seconds, then falter to almost nothing for 20 
>> seconds, then resume quick operations, and repeat.  This is visible both 
>> in the fact that the filesystem is unresponsive during the 20-second 
>> slow period, and in the fact that network use drops drastically during 
>> that time.
>>
>> I'm wondering a couple of things:
>>
>> 1) Is there any known problem with passing through two layers of iscsi?
>>
>> 2) Is this type of bursty traffic normal?
>>
>> 3) This may be related to the queue_depth filling up - is there an easy 
>> way to see the depth of the queue in real time? (Not the setting, but 
>> the actual state of the queue)
>>
>> I know the setup is a bit convoluted, so here is another view of it.  The 
>> physical data blocks are on the left, becoming more layered to the right 
>> until we hit our "production" servers. (This is all pre-production now)
>>
>> SATA_disks -> Raid6_controller -> IET(blockio) -> (next box) -> 
>> open-iscsi -> md_raid5 -> lvm -> Logical_volume -> IET(blockio) -> (next 
>> box) -> open-iscsi -> filesystem_on_production_server
>>
>> It is when writing to that filesystem that the problem becomes pronounced.
>>
>> Any insight that anyone has on this would be very welcome.
>>   
>>     
> I haven't tried making a RAID on top of iSCSI targets, but my bet is that 
> adding another RAID 5 layer makes things harder.  RAID 5 performs worse 
> than RAID 1 on writes, since RAID 5 requires two reads and two writes for 
> every write to the array.  Try RAID 1 over the iSCSI targets to see if 
> the performance improves.
>
> Miguel
>
>   


-- 
-===========================-
  Ty! Boyack
  NREL Unix Network Manager
  [EMAIL PROTECTED]
  (970) 491-1186
-===========================-


