Re: [CentOS] CentOS 8 LSI SAS2004 Driver

2020-09-19 Thread John Pierce
that chip should use the MPT2SAS driver, same as the more common SAS2008.
A complication for both of those is that they can be flashed to either IT
(initiator-target mode, a pure SAS HBA) or IR (integrated raid, a rather
weak implementation of hardware raid) firmware. AFAIK, the MPT2 driver is
for IT mode, so if the board identifies itself as IR, I believe it uses a
different 'megaraid' driver.
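
One quick way to see which personality the board presents and which driver the
kernel actually binds to it (an illustrative check; match whatever string lspci
prints for your card):

  # list the SAS controller together with its kernel driver
  lspci -nnk | grep -iA3 'SAS2004'
  # the "Kernel driver in use:" line shows which driver claimed the card
  # (e.g. mpt3sas/mpt2sas vs. megaraid_sas)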

On Sat, Sep 19, 2020 at 8:04 PM William Markuske 
wrote:

> Hello,
>
> I've recently been given domain over a number of supermicro storage
> servers using Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2
> [Spitfire] (rev 03) to run a bunch of SSDs. I was attempting to do fresh
> installs of CentOS 8 and have come to find out that RedHat deprecated
> support for a number of HBAs for 8 including all running the SAS2004 chip.
>
> Does anyone know if there is a driver available for this chip from a
> third party repo? My google searches have led me to believe that EPEL 8
> has a kmod-mpt3sas package but it does not seem to exist though multiple
> blogs have stated otherwise. If anyone knows if there is a solution for
> CentOS 8 that would be great or if I have to roll back to CentOS 7 for
> card support.
>
> Thanks,
>
> William
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos
>


-- 
-john r pierce
  recycling used bits in santa cruz
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS 8 LSI SAS2004 Driver

2020-09-19 Thread Akemi Yagi
On Sat, Sep 19, 2020 at 8:04 PM William Markuske  wrote:
>
> Hello,
>
> I've recently been given domain over a number of supermicro storage
> servers using Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2
> [Spitfire] (rev 03) to run a bunch of SSDs. I was attempting to do fresh
> installs of CentOS 8 and have come to find out that RedHat deprecated
> support for a number of HBAs for 8 including all running the SAS2004 chip.
>
> Does anyone know if there is a driver available for this chip from a
> third party repo? My google searches have led me to believe that EPEL 8
> has a kmod-mpt3sas package but it does not seem to exist though multiple
> blogs have stated otherwise. If anyone knows if there is a solution for
> CentOS 8 that would be great or if I have to roll back to CentOS 7 for
> card support.
>
> Thanks,
>
> William

It is ELRepo, not EPEL. :)

http://elrepoproject.blogspot.com/2019/08/rhel-80-and-support-for-removed-adapters.html
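
For an already-installed CentOS 8 box, pulling the kmod in from ELRepo
typically looks something like the sketch below (package names as published by
ELRepo; for the installer itself you need the driver update disk described in
the post above):

  dnf install https://www.elrepo.org/elrepo-release-8.el8.elrepo.noarch.rpm
  dnf install kmod-mpt3sas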

Akemi
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] CentOS 8 LSI SAS2004 Driver

2020-09-19 Thread William Markuske
Hello,

I've recently been given domain over a number of Supermicro storage
servers using the Broadcom / LSI SAS2004 PCI-Express Fusion-MPT SAS-2
[Spitfire] (rev 03) to run a bunch of SSDs. I was attempting to do fresh
installs of CentOS 8 and have come to find out that Red Hat deprecated
support for a number of HBAs in 8, including all those running the SAS2004 chip.

Does anyone know if there is a driver available for this chip from a
third-party repo? My Google searches led me to believe that EPEL 8
has a kmod-mpt3sas package, but it does not seem to exist, though multiple
blogs have stated otherwise. If anyone knows of a solution for CentOS 8
that would be great; otherwise it looks like I have to roll back to
CentOS 7 for card support.

Thanks,

William

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Drive failed in 4-drive md RAID 10

2020-09-19 Thread Kenneth Porter
--On Friday, September 18, 2020 10:53 PM +0200 Simon Matter 
 wrote:



mdadm --remove /dev/md127 /dev/sdf1

and then the same command with --add should hot-remove and add the device again.

If it rebuilds fine it may again work for a long time.


This worked like a charm. When I added it back, it told me it was 
"re-adding" the drive, so it recognized the drive I'd just removed. I 
checked /proc/mdstat and it showed rebuilding. It took about 90 minutes to 
finish and is now running fine.
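
Spelled out, the sequence described above is roughly (device names as used
earlier in this thread; the member must already be marked faulty before
--remove):

  mdadm --remove /dev/md127 /dev/sdf1   # drop the failed member from the array
  mdadm --add /dev/md127 /dev/sdf1      # reported as "re-adding" above, since the array still recognized the member
  cat /proc/mdstat                      # watch the resync/rebuild progress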


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] storage for mailserver

2020-09-19 Thread Phil Perry

On 19/09/2020 19:19, Chris Schanzle via CentOS wrote:


On 9/17/20 4:25 PM, Phil Perry wrote:

On 17/09/2020 13:35, Michael Schumacher wrote:

Hello Phil,

Wednesday, September 16, 2020, 7:40:24 PM, you wrote:

PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
PP> marking the HDD members as --write-mostly, meaning most of the reads
PP> will come from the faster SSDs retaining much of the speed advantage,
PP> but you have the redundancy of both SSDs and HDDs in the array.

PP> Read performance is not far off native write performance of the SSD, and
PP> writes mostly cached / happen in the background so are not so noticeable
PP> on a mail server anyway.

very interesting. Do you or anybody else have experience with this
setup? Any test results to compare? I will do some testing if nobody
can come up with comparisons.


best regards
---
Michael Schumacher


Here's a few performance stats from my setup, made with fio.

Firstly, a RAID1 array from 2 x WD Black 1TB drives. The second set of figures is 
for a RAID1 array with the same 2 WD Black 1TB drives and a WD Blue NVMe 
(PCIe X2) added into the array, with the 2 x HDDs set to --write-mostly.

Sequential write QD32
147MB/s (2 x HDD RAID1)
156MB/s (1 x NVMe, 2 x HDD RAID1)

The write tests give near identical performance with and without the SSD in the 
array as once any cache has been saturated, write speeds are presumably limited 
by the slowest device in the array.

Sequential read QD32
187MB/s (2 x HDD RAID1)
1725MB/s (1 x NVMe, 2 x HDD RAID1)

Sequential read QD1
162MB/s (2 x HDD RAID1)
1296MB/s (1 x NVMe, 2 x HDD RAID1)

4K random read
712kB/s (2 x HDD RAID1)
55.0MB/s (1 x NVMe, 2 x HDD RAID1)

The read speeds are a completely different story, and the array essentially 
performs identically to the native speed of the SSD device once the slower HDDs 
are set to --write-mostly, meaning the reads are prioritized to the SSD device. 
The SSD NVMe device is limited to PCIe X2 hence why sequential read speeds top 
out at 1725MB/s. Current PCIe X4 devices should be able to double that.

To summarize, a hybrid RAID1 mixing HDDs and SSDs will have write performance 
similar to the HDD (slowest device) and read performance similar to the SSD 
(fastest device) as long as the slower HDDs are added to the array with the 
--write-mostly flag set. Obviously these are synthetic I/O tests and may not 
reflect real world application performance but at least give you a good idea 
where the underlying bottlenecks may be.



Too bad the 4k random write tests are missing above.



4k random writes QD1

with fsync=1
56.6kB/s (2 x HDD RAID1)
77.8kB/s (1 x NVMe, 2 x HDD RAID1)

with fsync=1000
1431kB/s (2 x HDD RAID1)
1760kB/s (1 x NVMe, 2 x HDD RAID1)
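
For anyone wanting to reproduce numbers of this kind, a 4k random-write run can
be driven with fio along roughly these lines (parameters and target path are
illustrative, not the exact job used for the figures above):

  fio --name=randwrite-fsync --filename=/mnt/array/fio.test --size=1G \
      --rw=randwrite --bs=4k --iodepth=1 --ioengine=libaio --direct=1 \
      --fsync=1 --runtime=60 --time_based

Changing --fsync=1 to --fsync=1000 corresponds to the second pair of figures.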


I have used SSD + HDD RAID1 configurations in dozens of CentOS desktops and 
servers for years and it works very well with the --write-mostly flag being set 
on the HDD.  With most reads coming from the SSD, starting programs are much 
quicker.

However, I find the write queue to be very, very small, so the system "feels" like a slow HDD system during writing.

Yes, as per above, 4k random write performance is similar to that of a 
pure HDD RAID array.
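
For reference, a hybrid RAID1 of the kind discussed in this thread can be
created with the HDD members flagged --write-mostly from the start; a minimal
sketch, with placeholder device names:

  # NVMe listed first; the two HDDs after --write-mostly are de-prioritized for reads
  mdadm --create /dev/md0 --level=1 --raid-devices=3 \
      /dev/nvme0n1p1 --write-mostly /dev/sda1 /dev/sdb1
  # write-mostly members show up with a (W) flag in /proc/mdstat
  cat /proc/mdstat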


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] storage for mailserver

2020-09-19 Thread Gordon Messmer

On 9/16/20 10:40 AM, Phil Perry wrote:
You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and 
marking the HDD members as --write-mostly, meaning most of the reads 
will come from the faster SSDs retaining much of the speed advantage, 
but you have the redundancy of both SSDs and HDDs in the array. 



Was the write-behind crash bug ever actually fixed?  I don't see it in 
more recent release notes, but the bug listed isn't public, so I can't 
check its status.


https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_kernel


___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] storage for mailserver

2020-09-19 Thread Chris Schanzle via CentOS

On 9/17/20 4:25 PM, Phil Perry wrote:
> On 17/09/2020 13:35, Michael Schumacher wrote:
>> Hello Phil,
>>
>> Wednesday, September 16, 2020, 7:40:24 PM, you wrote:
>>
>> PP> You can achieve this with a hybrid RAID1 by mixing SSDs and HDDs, and
>> PP> marking the HDD members as --write-mostly, meaning most of the reads
>> PP> will come from the faster SSDs retaining much of the speed advantage,
>> PP> but you have the redundancy of both SSDs and HDDs in the array.
>>
>> PP> Read performance is not far off native write performance of the SSD, and
>> PP> writes mostly cached / happen in the background so are not so noticeable
>> PP> on a mail server anyway.
>>
>> very interesting. Do you or anybody else have experience with this
>> setup? Any test results to compare? I will do some testing if nobody
>> can come up with comparisons.
>>
>>
>> best regards
>> ---
>> Michael Schumacher
>
> Here's a few performance stats from my setup, made with fio.
>
> Firstly a RAID1 array from 2 x WD Black 1TB drives. Second set of figures are 
> the same are for a RAID1 array with the same 2 WD Black 1TB drives and a WD 
> Blue NVMe (PCIe X2) added into the array, with the 2 X HDDs set to 
> --write-mostly.
>
> Sequential write QD32
> 147MB/s (2 x HDD RAID1)
> 156MB/s (1 x NVMe, 2 x HDD RAID1)
>
> The write tests give near identical performance with and without the SSD in 
> the array as once any cache has been saturated, write speeds are presumably 
> limited by the slowest device in the array.
>
> Sequential read QD32
> 187MB/s (2 x HDD RAID1)
> 1725MB/s (1 x NVMe, 2 x HDD RAID1)
>
> Sequential read QD1
> 162MB/s (2 x HDD RAID1)
> 1296MB/s (1 x NVMe, 2 x HDD RAID1)
>
> 4K random read
> 712kB/s (2 x HDD RAID1)
> 55.0MB/s (1 x NVMe, 2 x HDD RAID1)
>
> The read speeds are a completely different story, and the array essentially 
> performs identically to the native speed of the SSD device once the slower 
> HDDs are set to --write-mostly, meaning the reads are prioritized to the SSD 
> device. The SSD NVMe device is limited to PCIe X2 hence why sequential read 
> speeds top out at 1725MB/s. Current PCIe X4 devices should be able to double 
> that.
>
> To summarize, a hybrid RAID1 mixing HDDs and SSDs will have write performance 
> similar to the HDD (slowest device) and read performance similar to the SSD 
> (fastest device) as long as the slower HDDs are added to the array with the 
> --write-mostly flag set. Obviously these are synthetic I/O tests and may not 
> reflect real world application performance but at least give you a good idea 
> where the underlying bottlenecks may be.


Too bad the 4k random write tests are missing above.

I have used SSD + HDD RAID1 configurations in dozens of CentOS desktops and 
servers for years and it works very well with the --write-mostly flag being set 
on the HDD.  With most reads coming from the SSD, starting programs are much 
quicker.

However, I find the write queue to be very, very small, so the system "feels" 
like a slow HDD system during writing.  But it is possible to configure an 
extended write-behind buffer/queue which will greatly improve 'bursty' write 
performance (e.g., Yum/DNF updates or unpacking a tarball with many small 
files).

Do test, lest one of the kernel bugs seen over the years, such as [1], rears its ugly 
head (you will get a panic quickly).  The bug returned at some point and I gave 
up hoping upstream would keep it fixed.  For desktops, it left me unable to 
boot and required console access to fix.

In short, use 'mdadm --examine-bitmap' on a component (not the md device 
itself) and look at "Write Mode."  I set it to the maximum of 16383 which must 
be done when the bitmap is created, so remove the bitmap and create a new one 
with the options you prefer:

mdadm /dev/mdX --grow --bitmap=none
mdadm /dev/mdX --grow --bitmap=internal --bitmap-chunk=512M --write-behind=16383

Note sync_action must be idle if you decide to script this.  Bigger 
bitmap-chunks are my preference, but might not be yours.  Your mileage and 
performance may differ.  :-)
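
A minimal sketch of that workflow, folding in the sync_action check mentioned
above (array and component names are placeholders):

  cat /sys/block/md0/md/sync_action   # only touch the bitmap while this reports "idle"
  mdadm /dev/md0 --grow --bitmap=none
  mdadm /dev/md0 --grow --bitmap=internal --bitmap-chunk=512M --write-behind=16383
  mdadm --examine-bitmap /dev/sda2 | grep -i 'write mode'   # verify on a component, not the md device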

I've been meaning to test big write-behind values on my CentOS 8 systems...

[1] https://bugzilla.redhat.com/show_bug.cgi?id=1582673  (login required to 
view)



___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS-virt] CentOS 8 Install as DOMU in PV Environment

2020-09-19 Thread Chris Wik
We tried to get CentOS 8 domU working in PV mode as well but did not have any 
success and ended up deploying it in HVM mode.

The reason the OP gave was lack of hardware support for HVM. That wasn't our 
rationale for wanting to run in PV mode. Our rationale was that we prefer to 
deploy CentOS 7 VMs on LVs which are formatted and deployed from an image on 
dom0 and don't have any partition table. This makes snapshotting, mounting, 
backing up and migrating very simple. We have written a number of scripts over 
the years that needed extensive modification to work with HVM VMs, but in the 
end we did the work because we couldn't get PV mode working. We also accepted 
that HVM is the future, so we might as well take the opportunity to adapt our ways.
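
For comparison, a minimal HVM domU definition on a raw LV looks roughly like
the sketch below (values are illustrative; the target path is the one from the
config quoted later in this thread). With HVM the guest firmware boots from
the disk itself, so the dom0-side kernel=/ramdisk= lines and pygrub drop out:

  type = "hvm"            # "builder" on older xl toolstacks
  name = "centos-8"
  memory = 3072
  vcpus = 2
  disk = [ 'format=raw, vdev=xvda, access=rw, target=/dev/mapper/vg_1-virtualmachine' ]
  vif = [ 'bridge=xenbr0' ]
  vnc = 1                 # graphical console for the installer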

If anyone manages to get PV mode working I'd still like to know.

Chris


On September 19, 2020 7:08:28 PM GMT+02:00, "Radosław Piliszek" 
 wrote:
>Hi,
>
>In general, PV tends not to be supported in newer distribution
>releases.
>This is mostly due to HVM performance and flexibility nowadays, which
>just was not the case back in the days when PV ruled.
>
>I am curious why you are trying PV.
>
>-yoctozepto
>
>On Sat, Sep 19, 2020 at 6:41 PM 9f9dcad3f78905b03201--- via
>CentOS-virt  wrote:
>>
>> All,
>>
>> Just wanted to check one last time before letting this thread die.
>>
>> I am curious if anyone has gotten CentOS 8 to work in a PV Xen
>environment.
>>
>>
>> Thanks.
>>
>>
>> <9f9dcad3f78905b03...@bcirpg.com> wrote:
>> >All,
>> >
>> >I have successfully installed CentOS 7 on a PV environment, and have
>> >been trying to see if I can get a CentOS 8 install running.
>> >
>> >Hardware does not support virtualization extensions, hence the PV
>> >environment and I can't do HVM for the install then migrate.
>> >
>> >My understanding is that PV support is in the kernel, and that the
>> >distro of Linux shouldn't technically matter. But currently when
>> >trying to PXEBoot using a CentOS 8 kernel and ram image I
>> >get a near instant crash for an invalid kernel.
>> >
>> >I tried to get around the issue by using DOM0 kernel and Ram Disk
>> >for the install (DOM0 is Debian 10), having the boot progress until it
>> >reaches the following, looping ISCSI error:
>> >
>> >[  OK  ] Reached target Slices.
>> > Starting Create Static Device Nodes in /dev...
>> >[  OK  ] Started iSCSI UserSpace I/O driver.
>> >[  OK  ] Started Setup Virtual Console.
>> > Starting dracut cmdline hook...
>> >[  OK  ] Started Apply Kernel Variables.
>> >[  OK  ] Stopped iSCSI UserSpace I/O driver.
>> > Starting iSCSI UserSpace I/O driver...
>> >
>> >I have also tried the CentOS 7 kernel Ram Disk with the same
>> >results.
>> >
>> >I even tried installing CentOS 7 clean, then upgrading in place (by
>> >unofficial and unsupported means) and was left with an error that
>> >pygrub couldn't find the partition with the kernel.
>> >
>> >Is this a bug, or is PV just not supported? Or am I doing
>> >something wrong?
>> >
>> >Config for the install is below:
>> >
>> ># Kernel paths for install
>> >#kernel = /var/opt/xen/ISO_Store/Centos8PXEBoot/vmlinuz
>> >kernel = /vmlinuz
>> >#ramdisk = /var/opt/xen/ISO_Store/Centos8PXEBoot/initrd.img
>> >ramdisk = /initrd.img
>> >extra=modules=loop,squashfs console=hvc0
>> >
>> ># Path to HDD and iso file
>> >disk = [
>> >#file:/vmdisk0,xvda,w
>> >format=raw, vdev=xvda, access=w, target=/dev/mapper/vg_1-virtualmachine,
>> >   ]
>> >
>> >extra=ksdevice= inst.repo=https://mirror.jaleco.com/centos/8.2.2004/isos/x86_64/ nameserver=1.1.1.1
>> >
>> ># Network configuration
>> >vif = [bridge=xenbr0]
>> >
>> >#DomU Settings
>> >memory = 3072
>> >name = centos-8.2
>> >
>> >Thank you to all.
>> >___
>> >CentOS-virt mailing list
>> >CentOS-virt@centos.org
>> >https://lists.centos.org/mailman/listinfo/centos-virt
>> ___
>> CentOS-virt mailing list
>> CentOS-virt@centos.org
>> https://lists.centos.org/mailman/listinfo/centos-virt
>___
>CentOS-virt mailing list
>CentOS-virt@centos.org
>https://lists.centos.org/mailman/listinfo/centos-virt

-- 
Chris Wik
Anu Internet Services
www.cwik.ch | www.anu.net
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] CentOS 8 Install as DOMU in PV Environment

2020-09-19 Thread Radosław Piliszek
Hi,

In general, PV tends not to be supported in newer distribution releases.
This is mostly due to HVM performance and flexibility nowadays, which
just was not the case back in the days when PV ruled.

I am curious why you are trying PV.
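
For anyone double-checking whether a host really lacks HVM support, dom0
reports it; an illustrative check:

  xl info | grep -E 'virt_caps|xen_caps'   # an 'hvm' entry here means HVM guests are possible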

-yoctozepto

On Sat, Sep 19, 2020 at 6:41 PM 9f9dcad3f78905b03201--- via
CentOS-virt  wrote:
>
> All,
>
> Just wanted to check one last time before letting this thread die.
>
> I am curious if anyone has gotten CentOS 8 to work in a PV Xen environment.
>
>
> Thanks.
>
>
> <9f9dcad3f78905b03...@bcirpg.com> wrote:
> >All,
> >
> >I have successfully installed CentOS 7 on a PV environment, and have been 
> >trying to see if I can get a CentOS 8 install running.
> >
> >Hardware does not support virtualization extensions, hence the PV 
> >environment and I can't do HVM for the install then migrate.
> >
> >My understanding is that PV support is in the kernel, and that the distro of 
> >Linux shouldn't technically matter. But currently when trying to 
> >PXEBoot using a CentOS 8 kernel and ram image I get a near 
> >instant crash for an invalid kernel.
> >
> >I tried to get around the issue by using DOM0 kernel and Ram Disk for the 
> >install (DOM0 is Debian 10), having the boot progress until it reaches the 
> >following, looping ISCSI error:
> >
> >[  OK  ] Reached target Slices.
> > Starting Create Static Device Nodes in /dev...
> >[  OK  ] Started iSCSI UserSpace I/O driver.
> >[  OK  ] Started Setup Virtual Console.
> > Starting dracut cmdline hook...
> >[  OK  ] Started Apply Kernel Variables.
> >[  OK  ] Stopped iSCSI UserSpace I/O driver.
> > Starting iSCSI UserSpace I/O driver...
> >
> >I have also tried the CentOS 7 kernel Ram Disk with the same results.
> >
> >I even tried installing CentOS 7 clean, then upgrading in place (by 
> >unofficial and unsupported means) and was left with an error that pygrub 
> >couldn't find the partition with the kernel.
> >
> >Is this a bug, or is PV just not supported? Or am I doing something wrong?
> >
> >Config for the install is below:
> >
> ># Kernel paths for install
> >#kernel = /var/opt/xen/ISO_Store/Centos8PXEBoot/vmlinuz
> >kernel = /vmlinuz
> >#ramdisk = /var/opt/xen/ISO_Store/Centos8PXEBoot/initrd.img
> >ramdisk = /initrd.img
> >extra=modules=loop,squashfs console=hvc0
> >
> ># Path to HDD and iso file
> >disk = [
> >#file:/vmdisk0,xvda,w
> >format=raw, vdev=xvda, access=w, 
> >target=/dev/mapper/vg_1-virtualmachine,
> >   ]
> >
> >extra=ksdevice= 
> >inst.repo=https://mirror.jaleco.com/centos/8.2.2004/isos/x86_64/ 
> >nameserver=1.1.1.1
> >
> ># Network configuration
> >vif = [bridge=xenbr0]
> >
> >#DomU Settings
> >memory = 3072
> >name = centos-8.2
> >
> >Thank you to all.
> >___
> >CentOS-virt mailing list
> >CentOS-virt@centos.org
> >https://lists.centos.org/mailman/listinfo/centos-virt
> ___
> CentOS-virt mailing list
> CentOS-virt@centos.org
> https://lists.centos.org/mailman/listinfo/centos-virt
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt


Re: [CentOS-virt] CentOS 8 Install as DOMU in PV Environment

2020-09-19 Thread 9f9dcad3f78905b03201--- via CentOS-virt
All,

Just wanted to check one last time before letting this thread die.

I am curious if anyone has gotten CentOS 8 to work in a PV Xen environment.


Thanks.


<9f9dcad3f78905b03...@bcirpg.com> wrote:
>All,
>
>I have successfully installed CentOS 7 on a PV environment, and have been 
>trying to see if I can get a CentOS 8 install running.
>
>Hardware does not support virtualization extensions, hence the PV environment 
>and I can't do HVM for the install then migrate.
>
>My understanding is that PV support is in the kernel, and that the distro of 
>Linux shouldn't technically matter. But currently when trying to 
>PXEBoot using a CentOS 8 kernel and ram image I get a near instant 
>crash for an invalid kernel.
>
>I tried to get around the issue by using DOM0 kernel and Ram Disk for the 
>install (DOM0 is Debian 10), having the boot progress until it reaches the 
>following, looping ISCSI error:
>
>[  OK  ] Reached target Slices.
> Starting Create Static Device Nodes in /dev...
>[  OK  ] Started iSCSI UserSpace I/O driver.
>[  OK  ] Started Setup Virtual Console.
> Starting dracut cmdline hook...
>[  OK  ] Started Apply Kernel Variables.
>[  OK  ] Stopped iSCSI UserSpace I/O driver.
> Starting iSCSI UserSpace I/O driver...
>
>I have also tried the CentOS 7 kernel Ram Disk with the same results.
>
>I even tried installing CentOS 7 clean, then upgrading in place (by unofficial 
>and unsupported means) and was left with an error that pygrub couldn't find the 
>partition with the kernel.
>
>Is this a bug, or is PV just not supported? Or am I doing something wrong?
>
>Config for the install is below:
>
># Kernel paths for install
>#kernel = /var/opt/xen/ISO_Store/Centos8PXEBoot/vmlinuz
>kernel = /vmlinuz
>#ramdisk = /var/opt/xen/ISO_Store/Centos8PXEBoot/initrd.img
>ramdisk = /initrd.img
>extra=modules=loop,squashfs console=hvc0
>
># Path to HDD and iso file
>disk = [
>#file:/vmdisk0,xvda,w
>format=raw, vdev=xvda, access=w, 
>target=/dev/mapper/vg_1-virtualmachine,
>   ]
>
>extra=ksdevice= 
>inst.repo=https://mirror.jaleco.com/centos/8.2.2004/isos/x86_64/ 
>nameserver=1.1.1.1
>
># Network configuration
>vif = [bridge=xenbr0]
>
>#DomU Settings
>memory = 3072
>name = centos-8.2
>
>Thank you to all.
>___
>CentOS-virt mailing list
>CentOS-virt@centos.org
>https://lists.centos.org/mailman/listinfo/centos-virt
___
CentOS-virt mailing list
CentOS-virt@centos.org
https://lists.centos.org/mailman/listinfo/centos-virt