> I have set up the storage with 4x 12 Gbps SAS disks and an LSI 9300 HBA
> controller... Locally on the storage I get around 400 MB/s read with
> different random dd tests, which is quite okay (I don't have a cache yet).
>
> The storage and the Proxmox host are directly attached via a 20 GbE bond,
> and I get around 18-19 Gbit/s with iperf.
>
> When I now add a ZFS-over-iSCSI target (with COMSTAR) on the Proxmox
> server and give a test VM a virtio disk (raw) on that storage, I get
> around 150-170 MB/s with the same dd test. If I run the volume through an
> NFS share I get around 270-300 MB/s.
>
> Can you help me identify why iSCSI is so slow? Why don't I get the same
> read/write performance inside the VM as on the storage directly?
Hi Steffen,

I don't think you can ever attain local-storage performance over the network (I am thinking of the latency of fsync calls, which have to travel between the initiator and the target).

I would advise you to use something like bonnie++ to measure your I/O, because with dd you can't be sure whether you're hitting the Linux page cache or the device itself when doing I/O.

Did you have a look at https://pve.proxmox.com/wiki/Performance_Tweaks ?

Emmanuel

_______________________________________________
pve-user mailing list
pve-user@pve.proxmox.com
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
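To illustrate Emmanuel's point about dd and the page cache, here is a rough sketch of how one might benchmark while bypassing the cache. The file paths, sizes, and RAM assumption below are examples, not values from this thread:

```shell
# Sketch: make dd measure the device/network path instead of RAM.
# /mnt/test is an assumed mount point on the storage under test.

# Write test: oflag=direct opens the file with O_DIRECT (bypasses the
# page cache); conv=fsync makes dd flush before reporting its rate.
dd if=/dev/zero of=/mnt/test/ddfile bs=1M count=4096 oflag=direct conv=fsync

# Read test: drop cached pages first, then read with O_DIRECT.
echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/test/ddfile of=/dev/null bs=1M count=4096 iflag=direct

# bonnie++ alternative: use a dataset about twice the machine's RAM
# (here assuming 16 GiB RAM) so results cannot come from cache alone.
# -d test directory, -s dataset size, -n 0 skips small-file tests.
bonnie++ -d /mnt/test -s 32g -n 0 -u root
```

Without `oflag=direct`/`conv=fsync`, a dd write can complete mostly into the page cache, which would explain seeing near-local numbers that the network path cannot actually sustain.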