I have run into a strange problem. While setting up my Fedora 24 box for
KVM, I noticed that my RAID array stopped showing up as an available
drive. After a lot of troubleshooting, and reinstalling Linux on this PC
two more times, I have narrowed it down to one setting on the kernel
command line in my GRUB config:
intel_iommu=on
Just taking that one setting out and rebuilding my grub2-efi.cfg makes
the array readable again. With IOMMU on, the system can see that the RAID
card and the array are there, but it lists the partition table as
unknown, and it also fails to create a new GPT partition table on the
array. With IOMMU off, it reads the partition table fine, partitions
fine, reads and writes data, and everything works.
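For reference, this is roughly how I am toggling the setting, in case
there is something wrong with my procedure (paths assume a stock Fedora
EFI install; adjust for your layout):

$ sudo vi /etc/default/grub    # add or remove intel_iommu=on in GRUB_CMDLINE_LINUX
$ sudo grub2-mkconfig -o /boot/efi/EFI/fedora/grub.cfg
$ sudo reboot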
The array is four 1 TB disks in RAID 10e, with a GPT partition table,
formatted NTFS.
$ lspci -v -s 03:0e.0
03:0e.0 RAID bus controller: Adaptec AAC-RAID
Subsystem: Adaptec 3805
Flags: bus master, stepping, 66MHz, medium devsel, latency 32, IRQ 57
Memory at fa600000 (64-bit, non-prefetchable) [size=2M]
Expansion ROM at fa800000 [disabled] [size=256K]
Capabilities: <access denied>
Kernel driver in use: aacraid
Kernel modules: aacraid
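I haven't included the kernel log here, but if anyone wants to compare,
any DMAR faults or aacraid errors with IOMMU on should be visible with
something like:

$ dmesg | grep -i -e dmar -e iommu -e aacraid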
$ lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 vfat efi 9E87-0ECF
├─sda2 ext4 boot 4f8d2664-8fc0-4779-ac18-6f0c06152d27
└─sda3 crypto_LUKS ccee7bd2-4dce-4966-addb-bba6194ade93
sdb
* With IOMMU on, lsblk -f still lists sdb, but shows no file system type
or label for it.
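I haven't captured it here, but a raw read of the second sector should
show whether the GPT header actually comes back from the array with IOMMU
on, something like:

$ sudo dd if=/dev/sdb bs=512 skip=1 count=1 2>/dev/null | hexdump -C | head -n 2

(On a healthy GPT disk, the first eight bytes of that sector read
"EFI PART".)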
$ uname -r
4.6.3-300.fc24.x86_64
Screenshot of GNOME Disks:
--
David
[email protected]