Ah, so that will edit the XML, right? Do you know where in the docs the
VM settings are mentioned?

On Thu, 5 Dec 2024 at 10:06, Nischal P <nischalnisc...@gmail.com> wrote:

> Hi Muhammad
>
>
> You can stop the VM and, in the VM settings, add "nicAdapter" with the
> value "virtio".
> Start the VM and check: if the drivers are present, it should use virtio
> in the guest OS.
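>
> For example with CloudMonkey (a sketch; the UUID is a placeholder, and
> the details[0] map syntax is what I believe cmk expects):
>
>   cmk stop virtualmachine id=<vm-uuid>
>   cmk update virtualmachine id=<vm-uuid> details[0].nicAdapter=virtio
>   cmk start virtualmachine id=<vm-uuid>
>
> Note that updateVirtualMachine may replace the whole details map, so
> include any existing detail keys in the same call.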
>
>
>
> Thanks
> Nischal
>
>
> On Thu, 5 Dec, 2024, 4:21 am Muhammad Hanis Irfan Mohd Zaid, <
> hanisirfan.w...@gmail.com> wrote:
>
> > Okay, I can edit the VM XML to use the virtio NIC model instead.
> >
> >
> > https://askubuntu.com/questions/552036/where-is-the-list-of-supported-device-emulations-in-kvm
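> >
> > For reference, this is the kind of libvirt change I mean (a sketch via
> > virsh; the domain name and bridge are placeholders):
> >
> >   virsh shutdown <vm-name>
> >   virsh edit <vm-name>
> >
> >   <interface type='bridge'>
> >     <source bridge='br0'/>
> >     <model type='virtio'/>
> >   </interface>
> >
> > Though as I understand it, CloudStack regenerates the domain XML when
> > it starts a VM on KVM, so a hand edit like this may not survive a
> > stop/start from CloudStack.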
> >
> > But is there an option in the UI/API to select the NIC model during or
> > after VM creation? If not, maybe that's a feature that could be added in
> > future versions. Since the CloudStack UI looks geared towards public
> > cloud use, my suggestion is to allow changing the model only for
> > administrators.
> >
> > On Wed, 4 Dec 2024, 12:36 Muhammad Hanis Irfan Mohd Zaid, <
> > hanisirfan.w...@gmail.com> wrote:
> >
> > > I have a VM running Windows Server 2022 on a KVM hypervisor. Currently
> > > it's using the Intel PRO/1000 MT as its network adapter. I would like
> > > it to use a VirtIO NIC, as I've already installed the necessary drivers
> > > and the QEMU guest agent. How can I do that? Based on this:
> > >
> > > https://users.cloudstack.apache.narkive.com/wDIB00YQ/how-do-you-customize-cs-vms-on-kvm
> > >
> > >
> > > Normally (at least with KVM):
> > > - The base disk (OS) will be IDE
> > > - All extra disks will be VirtIO
> > > - Networking depends on the 'OS Type' (not sure where this is defined
> > >   at the backend)
> > > - To use VirtIO Networking, select the OS Type of 'Other PV (xx-bit)'
> > >
> > > Any way to change the defaults?
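> > >
> > > If it really is the OS Type that drives the NIC model, I'm guessing it
> > > could also be changed via the API, something like the following (a
> > > sketch; the IDs are placeholders, and I'm assuming listOsTypes and
> > > updateVirtualMachine accept these parameters):
> > >
> > >   cmk list ostypes description='Other PV (64-bit)'
> > >   cmk update virtualmachine id=<vm-uuid> ostypeid=<ostype-uuid>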
> > >
> > > Another question is regarding Ceph RBD performance for primary
> > > storage. We have a virtualized Ceph cluster running on vSphere (I know
> > > it's not the best setup). The underlying ESXi hosts have at least 10G
> > > NICs. The Ceph VMs use VMXNET3 for their NICs and a SCSI controller
> > > for the disks. The vSphere datastore is on a Dell iSCSI SAN with, as
> > > far as I remember, 7200 RPM HDDs. There are 5 Ceph nodes with 2 OSDs
> > > (1 TiB) each. I'm getting around 11 MiB/s client throughput at most on
> > > Ceph, and operations like converting a snapshot/volume to a template
> > > or launching new VMs feel significantly slow.
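> > >
> > > If it helps, a direct baseline against the pool can be taken with
> > > rados bench (a sketch; 'cloudstack' is a placeholder pool name):
> > >
> > >   rados bench -p cloudstack 10 write --no-cleanup
> > >   rados bench -p cloudstack 10 seq
> > >   rados -p cloudstack cleanup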
> > >
> > > Any thoughts?
> > >
> >
>
