Hi,

you have to create a physical volume for each LUN:

  pvcreate /dev/dm-0
  pvcreate /dev/dm-1

Then you can create a volume group:

  vgcreate groupname /dev/dm-0

and extend the group with the second volume:

  vgextend groupname /dev/dm-1

After that you can add the LVM group as storage in the web interface.
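If you prefer doing that last step outside the GUI (Datacenter ->
Storage -> Add -> LVM), the definition ends up in
/etc/pve/storage.cfg. A minimal sketch of such an entry, where the
storage ID "san-lvm" is just an example name:

  lvm: san-lvm
          vgname groupname
          content images
          shared 1

Setting "shared 1" tells the cluster that every node sees the same
volume group over iSCSI, so guests can migrate between nodes without
copying their disks.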
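One caveat on the device names: the /dev/dm-N numbers are assigned
dynamically at boot and are not guaranteed to be stable, so it is
safer to point pvcreate at the persistent /dev/mapper names that
multipathd creates. A short sketch, where mpatha and mpathb stand in
for whatever names your setup actually shows:

  # list the active multipath maps and their WWIDs
  multipath -ll

  # create the PVs on the persistent mapper names instead of dm-N
  pvcreate /dev/mapper/mpatha
  pvcreate /dev/mapper/mpathb

  # confirm that LVM sees both physical volumes
  pvs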
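As for using multipath with the Proxmox web interface: there is
nothing to configure in PVE itself. You set up multipath at the OS
level, create the LVM volume group on top of the multipath device,
and PVE only ever talks to the volume group; the multipath layer
underneath is transparent to it. If you want readable device names,
you can pin an alias to the LUN's WWID in /etc/multipath.conf. A
sketch, assuming the placeholder WWID below is replaced with the one
reported by "multipath -ll":

  defaults {
          user_friendly_names yes
  }

  multipaths {
          multipath {
                  # placeholder, substitute the WWID of your LUN
                  wwid   <your-lun-wwid>
                  alias  msa_lun0
          }
  }

After editing the file, reload the maps with "multipath -r" and the
alias shows up as /dev/mapper/msa_lun0.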
###############################################################

Hi again^^,

I misinterpreted the benchmark result: with writeback cache it is
5500 MB/s, so that's OK... with direct I/O it gets about 100 MB/s.

But I still have no clue how I could use multipath with the Proxmox
web interface.

regards,
Harald

On 21.06.2016 at 19:15 Harald Leithner wrote:
> Hi,
>
> I have a MSA2312 connected with 1 Gbit to a 3-node PVE 4.2 cluster.
> I managed to get iSCSI and LVM working and attached it to a VM.
>
> So far I haven't managed to get multipath working with PVE, because
> the wiki page tells me how to set up multipath (which works) but not
> how PVE uses it for LVM VGs... maybe someone can give me a hint.
>
> Anyway, the main problem is that fio only gives me a read performance
> of 5 MB/s if I attach a disk as a virtio or SCSI device with cache
> "Writeback".
>
> If I attach the same SAN drive directly in the VM with iSCSI (with
> and without multipath) I get about 112 MB/s.
>
> I also get this speed on the PVE host.
>
> I use:
>
> fio --filename=/dev/sdb --direct=1 --rw=read --bs=1m --size=20G
> --numjobs=200 --runtime=60 --group_reporting --name=file1
>
> for benchmarking.
>
> While benchmarking the VM I see 1300% CPU usage on the host
> (mainly sys time).
>
> Maybe someone has an idea?
>
> regards

--
Harald Leithner

ITronic
Wiedner Hauptstraße 120/5.1, 1050 Wien, Austria
Tel: +43-1-545 0 604
Mobil: +43-699-123 78 4 78
Mail: [email protected] | itronic.at

_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user
