We currently have a customer evaluating a system with 16 x 12Gb/s SAS drives driven by an HBA. They intend to build systems with 24 bays and 24 SAS drives.

Maybe we are going about this all wrong and there is a smarter way to do this... however:

The customer's software is based on FreeBSD/NAS4Free, which runs as a VM on Proxmox VE.

The VM's conf file contains a snippet like this:

virtio0: /dev/sda
virtio1: /dev/sdb
virtio2: /dev/sdc
virtio3: /dev/sdd
virtio4: /dev/sde
virtio5: /dev/sdf
virtio6: /dev/sdg
virtio7: /dev/sdh

This is to let the software see the drives in the GUI and also, we think, to provide maximum performance. We are also considering PCI passthrough so the VM can see the HBA card directly.
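If we go the passthrough route, the VM conf change might look something like the sketch below (the PCI address 01:00.0 is purely hypothetical; the real address would come from lspci on the host):

```
# /etc/pve/qemu-server/<vmid>.conf -- hypothetical HBA passthrough
# 01:00.0 is a placeholder for the HBA's actual PCI address
hostpci0: 01:00.0
```

With the HBA passed through, the guest would see the drives natively and the VirtIO disk limit would no longer apply, at the cost of tying the VM to that host hardware.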

The FreeBSD system will then use these drives to create a zpool.
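For reference, inside the guest the pool creation would be something along these lines. This is only a sketch: the pool name and the vdev layout are assumptions, and FreeBSD typically exposes VirtIO block devices as vtbd0, vtbd1, and so on:

```
# Hypothetical: build a raidz2 pool from the eight VirtIO disks
# (device names vtbd0..vtbd7 and the layout are assumptions)
zpool create tank raidz2 vtbd0 vtbd1 vtbd2 vtbd3 vtbd4 vtbd5 vtbd6 vtbd7
zpool status tank
```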

With a limit of 16 VirtIO disks, it seems we can't use this approach to drive 24 disks.
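One workaround we are considering, based on the per-bus limits quoted below (4 IDE + 14 SCSI + 16 VirtIO + 6 SATA), would be to mix bus types: 16 VirtIO disks plus 8 SCSI disks would reach 24. A rough sketch of the conf (device paths and the scsihw choice are assumptions on our part):

```
# Hypothetical mix to reach 24 drives within the quoted per-bus limits
scsihw: virtio-scsi-pci
virtio0: /dev/sda
# ... virtio1 .. virtio15 ...
scsi0: /dev/sdq
# ... scsi1 .. scsi7 ...
```

We don't know whether mixing buses like this has performance or drive-naming implications inside the FreeBSD guest, so comments on that would be welcome too.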

Any suggestions? Any creative approaches to work around this problem?

Thanks.


On 07/31/2015 02:25 AM, Thomas Lamprecht wrote:
I wonder where 16 VirtIO disks per VM isn't enough; at the moment I can't
really see a use case where it's necessary to use 16 different VirtIO
disks on a single virtual machine.

On 07/30/2015 10:25 PM, Keri Alleyne wrote:
Good day,

I'm monitoring this thread:
https://forum.proxmox.com/threads/9782-There-is-now-a-limit-of-virtio-devices-drives


"Quote Originally Posted by dietmar

You can have 4 IDE disks, 14 SCSI disks, 16 VIRTIO disks and 6 SATA
disks (= 40 disks)."


Are we still limited to 16 VIRTIO disks on the recent versions of
Proxmox VE 3.4?

Thanks.
_______________________________________________
pve-user mailing list
[email protected]
http://pve.proxmox.com/cgi-bin/mailman/listinfo/pve-user


