>> I would say that each path has its own device (and therefore its own queue). >> So I'd argue that you may want to have (for example) 4 paths to each LUN or >> perhaps more (8?). For example, with 2 NICs, each connecting to two >> controllers, each controller having 2 NICs (so no SPOF and nice number of >> paths).
Totally get where you are coming from on paths to LUNs and using multipath. We do use that with the Dell Compellent storage we have: it has multiple active controllers, each with an IP address in a different subnet. Unfortunately, the EqualLogic does NOT have two active controllers. It has a single active controller and a single IP that migrates between the controllers when either one is active. If I don't use LACP I can't use both HBAs on the host with Ovirt, as Ovirt doesn't support Dell's Host Integration Tools (HIT) software (or you could argue Dell doesn't support Ovirt). So, instead of being able to have a large number of paths to each device, I can have either one active path or, with LACP, two. As two is the most paths I can have to a LUN with the infrastructure we have, we spread the I/O by increasing the number of targets (storage domains).

>> Depending on your storage, you may want to use rr_min_io_rq = 1 for latency
>> purposes.

Looking at the man page for multipath.conf, it looks like the default for rr_min_io_rq is now 1, where it was 1000 for the older rr_min_io setting. For now I've just removed it from our config file and we'll take the default.

I'm still seeing the same problem after the couple of changes made (lvmetad and multipath). I'm really not very good at understanding exactly what is going on in the Ovirt logs. Do they give any clues as to why it brings the host up and then takes it offline again? What is the barrage of lvm processes trying to achieve, and why do they apparently fail (it keeps re-running them)? As mentioned, throughout all this I see no multipath errors (all paths available) and no iSCSI connection errors to the EqualLogic. It just seems to be Ovirt that thinks the storage is unavailable for some reason.

Thanks,

Mark
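For what it's worth, a minimal multipath.conf sketch of the two-path round-robin setup described above might look like the following. This is an assumption-laden example, not our actual config: the vendor/product strings are the ones EqualLogic arrays commonly report, but you should verify them against the output of `multipath -ll` on your own hosts before using anything like this.

```
# /etc/multipath.conf -- illustrative sketch only.
# Vendor/product strings below are assumptions; confirm with `multipath -ll`.
defaults {
    user_friendly_names yes
    # rr_min_io_rq deliberately left unset: the man page says the
    # default is now 1 for request-based multipath.
}
devices {
    device {
        vendor               "EQLOGIC"
        product              "100E-00"
        # Put both LACP-backed paths in one group and round-robin I/O
        # across them, since only one controller is ever active.
        path_grouping_policy multibus
        path_selector        "round-robin 0"
        failback             immediate
    }
}
```

After editing, `multipathd reconfigure` (or restarting multipathd) should pick the changes up, and `multipathd show config` will confirm which values are actually in effect.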
_______________________________________________ Users mailing list Users@ovirt.org http://lists.ovirt.org/mailman/listinfo/users