Re: [CentOS] A Blast from the past

2021-08-17 Thread Michael Schumacher


> I changed the processor to an Ice Lake and I get the problem below when I
> boot the working Haswell disk.

Did you try to update your BIOS to the most recent version? Most BIOS updates 
add code to handle more recent CPUs.

--
Michael Schumacher

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] A Blast from the past

2021-08-17 Thread Simon Matter
> Thank you for your feedback.
>
> Unfortunately the manufacturer of our application software will only
> support
> it on RHEL/CentOS 7.0. I have asked and that is all they say.
> When CentOS 7.0 boots it does not recognise the CPU ID, flags a soft
> error, then continues.
> The Haswell and the Ice Lake both have 28 cores but different frequencies.
> A couple of clues: at the boot prompt the server cooling fans are running
> slowly; when it hangs, after a short delay, the fans run faster, and this
> is repeated.
> Also, when it hangs the keyboard is unresponsive and the server status
> LEDs state that all is okay.
> If Intel adheres to the x86_64 standard for its processors then surely
> the only difference would be the additional functionality.
> I am trying to find a resolution as this particular application is perfect
> for our requirements.
> Mark

But, if you only install the newer kernel, does your application work on
it? If so, why not just run it that way?

Apart from that, you could install a current distribution on the host and
then let the application server run in a KVM instance. That way you can
fine tune what kind of CPU/features are provided to the VM.
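For example, a minimal sketch with libvirt's virt-install, presenting a
Haswell-era CPU model to the guest so the old 7.0 kernel sees a CPU it
already knows (the guest name, memory, vCPU count and disk path here are
illustrative, not taken from your setup):

```shell
# Sketch only: import the existing CentOS 7.0 disk image as a KVM guest
# and expose a Haswell-era virtual CPU instead of the host's Ice Lake.
# Name, memory, vCPU count and disk path are example values.
virt-install \
  --name centos70-app \
  --memory 8192 \
  --vcpus 8 \
  --cpu Haswell-noTSX \
  --disk path=/var/lib/libvirt/images/centos70.qcow2 \
  --import \
  --os-variant centos7.0
```

The point of --cpu is exactly the fine tuning mentioned above: the guest
kernel never sees the unknown CPU ID.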

Regards,
Simon



Re: [CentOS] A Blast from the past

2021-08-17 Thread Jonathan Billings
On Tue, Aug 17, 2021 at 05:02:02PM +0100, Mark Woolfson wrote:
> Unfortunately the manufacturer of our application software will only support
> it on RHEL/CentOS 7.0. I have asked and that is all they say.

This is absurd.  The 7.0 kernel has so many well-known and well-documented
vulnerabilities that they are forcing you to run a kernel that can be
trivially exploited.  I would seriously push back with the
manufacturer.  Does it have a custom kernel module that it requires?
Or did they only test it on RHEL or CentOS 7.0 and never updated their
documentation?

In the past, I've asked vendors that tried this kind of nonsense if
they're willing to indemnify their customers for any security issues
that arise as a result of using their product. Feel free to list all
the CVEs in the current CentOS 7 kernel.  I see there are 1,125 CVEs
mentioned in the kernel changelog. It won't hold any legal water, most
likely, but it might get someone to at least look closer at the issue. 

-- 
Jonathan Billings 


Re: [CentOS] A Blast from the past

2021-08-17 Thread Stephen John Smoogen
On Tue, 17 Aug 2021 at 12:02, Mark Woolfson  wrote:
>
> Thank you for your feedback.
>
> Unfortunately the manufacturer of our application software will only support
> it on RHEL/CentOS 7.0. I have asked and that is all they say.
> When CentOS 7.0 boots it does not recognise the CPU ID, flags a soft
> error, then continues.
> The Haswell and the Ice Lake both have 28 cores but different frequencies.
> A couple of clues: at the boot prompt the server cooling fans are running
> slowly; when it hangs, after a short delay, the fans run faster, and this
> is repeated.
> Also, when it hangs the keyboard is unresponsive and the server status
> LEDs state that all is okay.
> If Intel adheres to the x86_64 standard for its processors then surely
> the only difference would be the additional functionality.
> I am trying to find a resolution as this particular application is perfect
> for our requirements.

You will either need an older computer or a different application. A
company which only supports a .0 version of an OS usually means they
aren't really supporting their customers. They are expecting you to
run an application on an OS without any security updates or
improvements. There are 7 years of CVEs in the kernel, libraries and
other parts that the customer has to live with if they want the
application. Good luck.



-- 
Stephen J Smoogen.
I've seen things you people wouldn't believe. Flame wars in
sci.astro.orion. I have seen SPAM filters overload because of Godwin's
Law. All those moments will be lost in time... like posts on a BBS...
time to reboot.


Re: [CentOS] A Blast from the past

2021-08-17 Thread Mark Woolfson
Thank you for your feedback.

Unfortunately the manufacturer of our application software will only support
it on RHEL/CentOS 7.0. I have asked and that is all they say.
When CentOS 7.0 boots it does not recognise the CPU ID, flags a soft
error, then continues.
The Haswell and the Ice Lake both have 28 cores but different frequencies.
A couple of clues: at the boot prompt the server cooling fans are running
slowly; when it hangs, after a short delay, the fans run faster, and this is
repeated.
Also, when it hangs the keyboard is unresponsive and the server status LEDs
state that all is okay.
If Intel adheres to the x86_64 standard for its processors then surely the
only difference would be the additional functionality.
I am trying to find a resolution as this particular application is perfect
for our requirements.
Mark
-Original Message-
From: CentOS  On Behalf Of Phil Perry
Sent: 17 August 2021 16:43
To: centos@centos.org
Subject: Re: [CentOS] A Blast from the past

On 17/08/2021 16:34, Simon Matter wrote:
>> Hello,
>> Can you please help with an interesting problem?
>> I have an Intel Haswell based processor with CentOS 7.0 with an early
>> kernel booting and running perfectly.
>> I changed the processor to an Ice Lake and I get the problem below
>> when I boot the working Haswell disk.
>> The boot process hangs almost immediately, and when I remove the 'quiet'
>> boot parameter I see that it hangs randomly, usually with a high CPU
>> number, when SMPBOOT is starting up the cores.
>> The only solution I have found is to boot with 'nr_cpus=8' (could be
>> any low number), update to the latest kernel, then reboot with the
>> 'nr_cpus=8' parameter removed.
>> On examination there are no problems with CentOS 7.4 and above, but
>> there are with CentOS 7.3 and below.
> 
> I think the issue is quite clear here: the newer CPU is not handled 
> correctly by the old kernel - maybe it even doesn't know this CPU type 
> and doesn't know how to detect the number of cores it has.
> 
> I don't think there is a better solution than what you already did.
> 
> Regards,
> Simon
> 

Exactly what I was thinking. CentOS 7.0 was released in 2014 and Intel Ice
Lake at least 5 years later, so there is no support for Ice Lake in the
CentOS 7.0 kernel. I'm surprised there is any support in 7.4. Why are you
not using the latest release?






Re: [CentOS] A Blast from the past

2021-08-17 Thread Phil Perry

On 17/08/2021 16:34, Simon Matter wrote:

>> Hello,
>> Can you please help with an interesting problem?
>> I have an Intel Haswell based processor with CentOS 7.0 with an early
>> kernel booting and running perfectly.
>> I changed the processor to an Ice Lake and I get the problem below when I
>> boot the working Haswell disk.
>> The boot process hangs almost immediately, and when I remove the 'quiet'
>> boot parameter I see that it hangs randomly, usually with a high CPU
>> number, when SMPBOOT is starting up the cores.
>> The only solution I have found is to boot with 'nr_cpus=8' (could be
>> any low number), update to the latest kernel, then reboot with the
>> 'nr_cpus=8' parameter removed.
>> On examination there are no problems with CentOS 7.4 and above, but
>> there are with CentOS 7.3 and below.
>
> I think the issue is quite clear here: the newer CPU is not handled
> correctly by the old kernel - maybe it even doesn't know this CPU type and
> doesn't know how to detect the number of cores it has.
>
> I don't think there is a better solution than what you already did.
>
> Regards,
> Simon



Exactly what I was thinking. CentOS 7.0 was released in 2014 and Intel Ice
Lake at least 5 years later, so there is no support for Ice Lake in the
CentOS 7.0 kernel. I'm surprised there is any support in 7.4. Why are you
not using the latest release?




Re: [CentOS] A Blast from the past

2021-08-17 Thread Simon Matter
> Hello,
> Can you please help with an interesting problem?
> I have an Intel Haswell based processor with CentOS 7.0 with an early
> kernel booting and running perfectly.
> I changed the processor to an Ice Lake and I get the problem below when I
> boot the working Haswell disk.
> The boot process hangs almost immediately, and when I remove the 'quiet'
> boot parameter I see that it hangs randomly, usually with a high CPU
> number, when SMPBOOT is starting up the cores.
> The only solution I have found is to boot with 'nr_cpus=8' (could be
> any low number), update to the latest kernel, then reboot with the
> 'nr_cpus=8' parameter removed.
> On examination there are no problems with CentOS 7.4 and above, but
> there are with CentOS 7.3 and below.

I think the issue is quite clear here: the newer CPU is not handled
correctly by the old kernel - maybe it even doesn't know this CPU type and
doesn't know how to detect the number of cores it has.

I don't think there is a better solution than what you already did.

Regards,
Simon



Re: [CentOS] NFS Share fails to mount at boot time

2021-08-17 Thread Simon Matter
> My suggestion - Add "_netdev" to the parameters list:
>
>   NAS2HOST:/volume1/export/  /mnt/NAS2 nfs
>   _netdev,rw,vers=3,soft,bg,intr 0 0

And, if it doesn't work, try this instead and please let us know which one
worked best:

NAS2HOST:/volume1/export/  /mnt/NAS2 nfs
rw,vers=3,soft,bg,intr,x-systemd.requires=network-online.target 0 0

The 'x-systemd.requires=network-online.target' option makes sure that this
NFS mount is _only_ attempted after a network interface is really online.
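For what it's worth, that fstab option roughly corresponds to a mount unit
like the following. This is a hand-written illustration, not the exact unit:
the real one is produced by systemd-fstab-generator at boot, and its details
may differ:

```ini
# Roughly what systemd derives from the fstab line (illustrative sketch)
[Unit]
Requires=network-online.target
After=network-online.target

[Mount]
What=NAS2HOST:/volume1/export/
Where=/mnt/NAS2
Type=nfs
Options=rw,vers=3,soft,bg,intr
```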

And I'm wondering why systemd doesn't do this by default because NFS
mounts are always _only_ possible with an online network.

Can someone explain to me the logic of what systemd does here?

Regards,
Simon

>
> 
> Bill Gee
>
>
> On Tuesday, August 17, 2021 9:18:53 AM CDT Felix Natter wrote:
>> hello fellow CentOS Users,
>>
>> on Scientific Linux 7 (_very_ similar to CentOS7), I get this when
>> trying to mount NFS Shares (exported from Synology NAS) automatically at
>> boot time:
>>
>> [root@HOST ~]# journalctl -b 0 | grep NAS[20]
>> Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS0...
>> Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS2...
>> Jul 01 13:32:09 HOST systemd[1]: mnt-NAS0.mount mount process exited,
>> code=exited status=32
>> Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS0.
>> Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS0.mount entered failed
>> state.
>> Jul 01 13:32:09 HOST systemd[1]: mnt-NAS2.mount mount process exited,
>> code=exited status=32
>> Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS2.
>> Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS2.mount entered failed
>> state.
>>
>> I read that enabling NetworkManager-wait-online.service can mitigate
>> that, but it's already enabled:
>>
>> [root@HOST ~]# systemctl list-unit-files|grep wait
>> chrony-wait.service                disabled
>> NetworkManager-wait-online.service enabled
>> plymouth-quit-wait.service         disabled
>>
>> /mnt/NAS2 is defined in /etc/fstab (/mnt/NAS0 is mounted analogously):
>>
>> NAS2HOST:/volume1/export/  /mnt/NAS2 nfs rw,vers=3,soft,bg,intr
>>   0 0
>>
>> This does not always occur, and it seems to be a race condition, because
>> it did not occur a few months ago, before we moved offices (when only
>> the networking changed slightly).
>>
>> Of course, once the computer is booted, I can always mount the shares
>> without problems.
>>
>> Does someone have an idea?
>>
>> Many Thanks and Best Regards,
>>
>
>
>
>




Re: [CentOS] A Blast from the past

2021-08-17 Thread Mark Woolfson
Hello,
Can you please help with an interesting problem?
I have an Intel Haswell based processor with CentOS 7.0 with an early kernel
booting and running perfectly.
I changed the processor to an Ice Lake and I get the problem below when I
boot the working Haswell disk.
The boot process hangs almost immediately, and when I remove the 'quiet' boot
parameter I see that it hangs randomly, usually with a high CPU number, when
SMPBOOT is starting up the cores.
The only solution I have found is to boot with 'nr_cpus=8' (could be any
low number), update to the latest kernel, then reboot with the 'nr_cpus=8'
parameter removed.
On examination there are no problems with CentOS 7.4 and above, but there are
with CentOS 7.3 and below.
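Spelled out, the workaround amounts to something like this, using grubby
(CentOS 7's boot-entry tool). This is only a sketch of the steps described
above; exact invocations may differ on a given system:

```shell
# Cap the visible CPU count so the old kernel can boot on the new CPU:
grubby --update-kernel=ALL --args="nr_cpus=8"
reboot
# Once the machine is back up on the capped boot, update the kernel:
yum update kernel
# Then remove the cap and reboot into the new kernel:
grubby --update-kernel=ALL --remove-args="nr_cpus=8"
reboot
```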
Mark






Re: [CentOS] NFS Share fails to mount at boot time

2021-08-17 Thread Bill Gee
My suggestion - Add "_netdev" to the parameters list:

NAS2HOST:/volume1/export/  /mnt/NAS2 nfs
_netdev,rw,vers=3,soft,bg,intr 0 0


Bill Gee


On Tuesday, August 17, 2021 9:18:53 AM CDT Felix Natter wrote:
> hello fellow CentOS Users,
> 
> on Scientific Linux 7 (_very_ similar to CentOS7), I get this when
> trying to mount NFS Shares (exported from Synology NAS) automatically at
> boot time:
> 
> [root@HOST ~]# journalctl -b 0 | grep NAS[20]
> Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS0...
> Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS2...
> Jul 01 13:32:09 HOST systemd[1]: mnt-NAS0.mount mount process exited, 
> code=exited status=32
> Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS0.
> Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS0.mount entered failed state.
> Jul 01 13:32:09 HOST systemd[1]: mnt-NAS2.mount mount process exited, 
> code=exited status=32
> Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS2.
> Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS2.mount entered failed state.
> 
> I read that enabling NetworkManager-wait-online.service can mitigate
> that, but it's already enabled:
> 
> [root@HOST ~]# systemctl list-unit-files|grep wait
> chrony-wait.service                disabled
> NetworkManager-wait-online.service enabled
> plymouth-quit-wait.service         disabled
> 
> /mnt/NAS2 is defined in /etc/fstab (/mnt/NAS0 is mounted analogously):
> 
> NAS2HOST:/volume1/export/  /mnt/NAS2 nfs rw,vers=3,soft,bg,intr   
>  0 0
> 
> This does not always occur, and it seems to be a race condition, because
> it did not occur a few months ago, before we moved offices (when only
> the networking changed slightly).
> 
> Of course, once the computer is booted, I can always mount the shares
> without problems.
> 
> Does someone have an idea?
> 
> Many Thanks and Best Regards,
> 






[CentOS] NFS Share fails to mount at boot time

2021-08-17 Thread Felix Natter
hello fellow CentOS Users,

on Scientific Linux 7 (_very_ similar to CentOS7), I get this when
trying to mount NFS Shares (exported from Synology NAS) automatically at
boot time:

[root@HOST ~]# journalctl -b 0 | grep NAS[20]
Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS0...
Jul 01 13:32:09 HOST systemd[1]: Mounting /mnt/NAS2...
Jul 01 13:32:09 HOST systemd[1]: mnt-NAS0.mount mount process exited, 
code=exited status=32
Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS0.
Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS0.mount entered failed state.
Jul 01 13:32:09 HOST systemd[1]: mnt-NAS2.mount mount process exited, 
code=exited status=32
Jul 01 13:32:09 HOST systemd[1]: Failed to mount /mnt/NAS2.
Jul 01 13:32:09 HOST systemd[1]: Unit mnt-NAS2.mount entered failed state.

I read that enabling NetworkManager-wait-online.service can mitigate
that, but it's already enabled:

[root@HOST ~]# systemctl list-unit-files|grep wait
chrony-wait.service                disabled
NetworkManager-wait-online.service enabled
plymouth-quit-wait.service         disabled

/mnt/NAS2 is defined in /etc/fstab (/mnt/NAS0 is mounted analogously):

NAS2HOST:/volume1/export/  /mnt/NAS2 nfs rw,vers=3,soft,bg,intr
0 0

This does not always occur, and it seems to be a race condition, because
it did not occur a few months ago, before we moved offices (when only
the networking changed slightly).

Of course, once the computer is booted, I can always mount the shares
without problems.

Does someone have an idea?

Many Thanks and Best Regards,
-- 
Felix Natter

