Re: [CentOS] CentOS Stream 8 sssd.service failing part of sssd-common-2.8.1-1.el8.x86_64 baseos package

2023-01-12 Thread Orion Poplawski

On 12/30/22 04:06, Jelle de Jong wrote:

On 12/27/22 22:55, Gordon Messmer wrote:

On 2022-12-25 07:44, Jelle de Jong wrote:
A recent update of the sssd-common-2.8.1-1.el8.x86_64 package is 
causing sssd.service systemctl failures all over my CentOS machines.

...
[sssd] [confdb_expand_app_domains] (0x0010): No domains configured, 
fatal error! 



Were you previously using sssd?  Or is the problem merely that it is 
now reporting an error starting a service that you don't use?


Are there any files in /etc/sssd/conf.d, or does /etc/sssd/sssd.conf 
exist?  If so, what are the contents of those files?


What are the contents of /usr/lib/systemd/system/sssd.service?

If you run "journalctl -u sssd.service", are there any log entries 
older than the package update?


I have a monitoring system for failing services, and I suddenly started 
getting dozens of notifications that sssd was failing on all my CentOS 
systems. This started after the sssd package update, which caused this 
regression. The SSSD services were not really in use, but some of the 
common libraries are used.


# systemctl status sssd
● sssd.service - System Security Services Daemon
    Loaded: loaded (/usr/lib/systemd/system/sssd.service; enabled; 
vendor preset: enabled)
    Active: failed (Result: exit-code) since Sat 2022-12-24 06:14:10 
UTC; 6 days ago

Condition: start condition failed at Fri 2022-12-30 11:02:01 UTC; 4s ago
    ├─ ConditionPathExists=|/etc/sssd/sssd.conf was not met
    └─ ConditionDirectoryNotEmpty=|/etc/sssd/conf.d was not met
  Main PID: 3953157 (code=exited, status=4)

Warning: Journal has been rotated since unit was started. Log output is 
incomplete or unavailable.

# ls -halt /etc/sssd/conf.d/
total 8.0K
drwx--x--x. 2 sssd sssd 4.0K Dec  8 13:08 .
drwx--. 4 sssd sssd 4.0K Dec  8 13:08 ..
# ls -halZ /etc/sssd/conf.d/
total 8.0K
drwx--x--x. 2 sssd sssd system_u:object_r:sssd_conf_t:s0 4.0K Dec  8 
13:08 .
drwx--. 4 sssd sssd system_u:object_r:sssd_conf_t:s0 4.0K Dec  8 
13:08 ..

# ls -halZ /etc/sssd/sssd.conf
ls: cannot access '/etc/sssd/sssd.conf': No such file or directory

# journalctl -u sssd.service --lines 10
-- Logs begin at Mon 2022-12-26 22:15:31 UTC, end at Fri 2022-12-30 
11:05:26 UTC. --

-- No entries --

Kind regards,

Jelle de Jong


I don't quite understand where this:
   Main PID: 3953157 (code=exited, status=4)

came from, as it seems like sssd was started at some point and failed. 
But that shouldn't have happened because:


Condition: start condition failed at Fri 2022-12-30 11:02:01 UTC; 4s ago
├─ ConditionPathExists=|/etc/sssd/sssd.conf was not met
└─ ConditionDirectoryNotEmpty=|/etc/sssd/conf.d was not met

It's telling you that because /etc/sssd/sssd.conf does not exist and 
/etc/sssd/conf.d is empty, the start conditions were not met and the 
service was not started.  This is as expected in your case.


If you don't want it to even check, just disable the service:

systemctl disable sssd.service

I'm not sure which of these states (or both) your service monitoring is 
keying off of, but perhaps by disabling the unit your monitoring system 
will be quiet about it.
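
For example, one possible cleanup, assuming you want the unit off 
entirely (reset-failed clears the remembered "failed" state that 
monitoring tools often alert on):

systemctl disable --now sssd.service
systemctl reset-failed sssd.service   # clear the remembered "failed" state
systemctl is-enabled sssd.service     # should now report "disabled"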


--
Orion Poplawski
he/him/his  - surely the least important thing about me
IT Systems Manager 720-772-5637
NWRA, Boulder/CoRA Office FAX: 303-415-9702
3380 Mitchell Lane   or...@nwra.com
Boulder, CO 80301 https://www.nwra.com/

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos Stream 9 module list

2023-01-12 Thread Gionatan Danti

On 2023-01-12 23:01, Josh Boyer wrote:

There have been many discussions on modularity, both on this list and
on lists like the epel and fedora devel lists, but I'll give a brief
subset.

Modularity provides parallel availability but not parallel
installability.  Some software needs or perhaps wants to be parallel
installable.  Also, some upstream language stacks such as python have
implemented parallel availability/installability inherently in their
framework, which eliminates the need for modules.

Ultimately, the Red Hat teams are using modularity where they believe
it makes sense and using regular packaging to reduce complexity for
customers where it doesn't provide much benefit.

josh


Makes sense.
Thank you for taking the time to explain.
Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos Stream 9 module list

2023-01-12 Thread Josh Boyer
On Thu, Jan 12, 2023 at 3:18 PM Gionatan Danti  wrote:
>
> On 2023-01-12 16:10, Josh Boyer wrote:
> > Modules are one of several packaging formats we have.  With CentOS
> > Stream 9/ RHEL 9, we took user and customer feedback on how the
> > default versions of software are packaged and determined that the
> > defaults should be normal RPMs.  Newer and alternative versions of
> > software will be delivered as modules in some cases, or as regular
> > RPMs with applicable versioning in others.
> >
> > josh
>
> Hi Josh,
> can I ask the rationale behind this decision?
>
> It seems "strange" to have some different versions in the main repos,
> with versioned RPMs, and others in specific modules (which need to be
> manually enabled).

There have been many discussions on modularity, both on this list and
on lists like the epel and fedora devel lists, but I'll give a brief
subset.

Modularity provides parallel availability but not parallel
installability.  Some software needs or perhaps wants to be parallel
installable.  Also, some upstream language stacks such as python have
implemented parallel availability/installability inherently in their
framework, which eliminates the need for modules.
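
For instance, newer Python stacks in CentOS Stream 9 ship as plain
versioned RPMs that install alongside the default interpreter (a
sketch; the exact versions available depend on the release):

dnf install python3.11
python3 --version      # default interpreter, unchanged
python3.11 --version   # the parallel-installed stack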

Ultimately, the Red Hat teams are using modularity where they believe
it makes sense and using regular packaging to reduce complexity for
customers where it doesn't provide much benefit.

josh

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos Stream 9 module list

2023-01-12 Thread Gionatan Danti

On 2023-01-12 16:10, Josh Boyer wrote:

Modules are one of several packaging formats we have.  With CentOS
Stream 9/ RHEL 9, we took user and customer feedback on how the
default versions of software are packaged and determined that the
defaults should be normal RPMs.  Newer and alternative versions of
software will be delivered as modules in some cases, or as regular
RPMs with applicable versioning in others.

josh


Hi Josh,
can I ask the rationale behind this decision?

It seems "strange" to have some different versions in the main repos, 
with versioned RPMs, and others in specific modules (which need to be 
manually enabled).


Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Upgrading system from non-RAID to RAID1

2023-01-12 Thread H
On 01/12/2023 03:00 AM, Simon Matter wrote:
>> Hello Simon,
>>
>>> Anyway, the splitting of large disks has additional advantages. Think of
>>> what happens in case of a failure (power loss, kernel crash...). With the
>>> disk as one large chunk, the whole disk has to be resynced on restart,
>>> while with smaller segments only those which are marked as dirty have to
>>> be resynced. This can make a big difference.
>> I am not sure if this is true. If an underlying disk fails, it will mark
>> all partitions on that disk as dirty, so you will have to resync them all
>> after replacing or re-adding the disk into the array.
> No, I'm not talking about a complete disk failure; my example wasn't a
> disk failure at all, but a server problem like power loss, kernel crash
> and such things. In this case only the segments which were not in sync at
> the time of the crash will be resynced on restart, not the whole disk.
>
> The same applies if a read error happens on one disk: only the affected
> segment loses redundancy, not the whole contents of the disk.
>
> That's a huge improvement especially on very large disks.
>
> Simon
>
> ___
> CentOS mailing list
> CentOS@centos.org
> https://lists.centos.org/mailman/listinfo/centos

I have not seen anyone comment on my plan: after partitioning the new SSDs 
that I have, do a new minimal install of C7 and then copy the old disk 
partitions - with the exceptions of /boot and /boot/efi - over the newly 
made installation.

Am I correct that this is needed since the old installation was not using 
RAID and the new one does? Both, of course, are using C7.
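
For what it's worth, the copy step I have in mind would be something 
like this (a sketch only; mount points are illustrative):

rsync -aAXH --exclude=/boot --exclude=/boot/efi /mnt/oldroot/ /mnt/newroot/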

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Centos Stream 9 module list

2023-01-12 Thread Josh Boyer
On Thu, Jan 12, 2023 at 10:08 AM Jos Vos  wrote:
>
> Hi,
>
> When I do "dnf module list --all" on CentOS Stream 8, I also see the
> stream versions installed by default, e.g. postgresql 10.
>
> But on CentOS Stream 9, I only see the newer stream version, like
> postgresql 15 and nodejs 18 (and not postgresql 13 and nodejs 16).
>
> Can anyone explain what's happening here?

Modules are one of several packaging formats we have.  With CentOS
Stream 9/ RHEL 9, we took user and customer feedback on how the
default versions of software are packaged and determined that the
defaults should be normal RPMs.  Newer and alternative versions of
software will be delivered as modules in some cases, or as regular
RPMs with applicable versioning in others.
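
As a concrete illustration of working with the module streams that
remain (a sketch; available stream names vary by release):

dnf module list postgresql        # show available streams and their state
dnf module enable postgresql:15   # opt in to a non-default stream
dnf install postgresql-server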

josh

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


[CentOS] Centos Stream 9 module list

2023-01-12 Thread Jos Vos
Hi,

When I do "dnf module list --all" on CentOS Stream 8, I also see the
stream versions installed by default, e.g. postgresql 10.

But on CentOS Stream 9, I only see the newer stream version, like
postgresql 15 and nodejs 18 (and not postgresql 13 and nodejs 16).

Can anyone explain what's happening here?

Thanks,

-- 
--Jos Vos 
--X/OS Experts in Open Systems BV   |   Office: +31 20 6938364
--Amsterdam, The Netherlands|   Mobile: +31 6 26216181
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Looking for a RAID1 box

2023-01-12 Thread Fleur
Hi All, very interesting thread; I'll add my two cents for free to all 
of you ...

A lot of satisfaction with the HP ProLiant MicroServer line, from the 
first GEN6 (AMD Neo) to the 1-year-old MicroServer Gen10 X3216 
(CentOS 6/7/8), so I think yours is the right choice!

In /boot/efi/ (mounted from the first partition of the first GPT disk) you
only have the grub2 EFI binary, not the vmlinuz kernel, the initrd image, or
the grub.cfg itself ...

To be more precise, a grub.cfg file exists there, but it's only a static
stub with an entry to find the right one using the UUID fingerprint:


cat \EFI\ubuntu\grub.cfg
search.fs_uuid d9f44ffb-3cb8-4783-8928-0123e5d8a149 root
set prefix=($root)'/@/boot/grub'
configfile $prefix/grub.cfg

Using an md software RAID1 mirror for this FAT32 (ESP) partition is not
safe IF you use it outside of the Linux environment (because the mirror
will become corrupted at the first write the other OSes do on this
partition).

It's better to set up a separate /boot partition (yes, here an md Linux
software RAID1 mirror is OK) which the grub2 bootloader can manage correctly
(be sure grub2 can access its modules to understand and manage this
LVM/RAID: mdraid09, mdraid1x, lvm.mod [1] [2]):

insmod raid
# and load the related `mdraid' module: `mdraid09' for RAID arrays with
# version 0.9 metadata, `mdraid1x' for arrays with version 1.x metadata
insmod mdraid09
set root=(md0p1)
# or the following for an unpartitioned RAID array
set root=(md0)

IMHO installing ex novo is the easiest path: the installer sets everything
up correctly, building the right initramfs and putting the correct entries
in grub.cfg for the modules needed to manage RAID/LVM...
To be honest, I don't know how the anaconda installer manages the /dev/sda1
ESP/FAT32/EFI partition (I'd like it to clone this EFI partition to the 2nd
disk, but I think it will leave the /dev/sdb1 partition empty).
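
If anaconda does leave it empty, one manual workaround is to clone the
populated ESP and register the second disk with the firmware (a sketch;
the device names and shim path are assumptions, adjust to your layout):

dd if=/dev/sda1 of=/dev/sdb1 bs=4M status=progress
efibootmgr --create --disk /dev/sdb --part 1 \
  --label "CentOS (disk 2)" --loader '\EFI\centos\shimx64.efi'

Note that dd also clones the FAT filesystem UUID, which is usually
harmless but worth knowing if /boot/efi is mounted by UUID in /etc/fstab.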

To understand better how GRUB2 works, I've looked here: [3] [4] [5]

Happy hacking

*Fleur*

[1] : https://unix.stackexchange.com/questions/187236/grub2-lvm2-raid1-boot
[2] : https://wiki.gentoo.org/wiki/GRUB/Advanced_storage
[3] : https://www.gnu.org/software/grub/manual/grub/grub.html
[4] :
https://documentation.suse.com/sled/15-SP4/html/SLED-all/cha-grub2.html
[5] : https://wiki.archlinux.org/title/GRUB
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Upgrading system from non-RAID to RAID1

2023-01-12 Thread Gionatan Danti

On 2023-01-12 09:00, Simon Matter wrote:

That's a huge improvement especially on very large disks.


Hi, any reasonably recent version of Linux MD RAID copes with these issues 
via two different means:

- a write-intent bitmap to track dirty disk regions;
- an embedded bad-sector list to remap such sectors when appropriate 
(and without failing the entire array, if possible).
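
For example, to check for and add an internal write-intent bitmap on an
existing array (a sketch; /dev/md0 is illustrative):

mdadm --detail /dev/md0 | grep -i bitmap   # is a bitmap present?
mdadm --grow --bitmap=internal /dev/md0    # add one in place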


Still, splitting a disk into multiple slices has the specific advantage 
of allowing different RAID levels on different datasets (which can be very 
valuable in some cases).


Regards.

--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.da...@assyoma.it - i...@assyoma.it
GPG public key ID: FF5F32A8
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] Upgrading system from non-RAID to RAID1

2023-01-12 Thread Simon Matter
> Hello Simon,
>
>> Anyway, the splitting of large disks has additional advantages. Think of
>> what happens in case of a failure (power loss, kernel crash...). With the
>> disk as one large chunk, the whole disk has to be resynced on restart,
>> while with smaller segments only those which are marked as dirty have to
>> be resynced. This can make a big difference.
>
> I am not sure if this is true. If an underlying disk fails, it will mark
> all partitions on that disk as dirty, so you will have to resync them all
> after replacing or re-adding the disk into the array.

No, I'm not talking about a complete disk failure; my example wasn't a
disk failure at all, but a server problem like power loss, kernel crash
and such things. In this case only the segments which were not in sync at
the time of the crash will be resynced on restart, not the whole disk.

The same applies if a read error happens on one disk: only the affected
segment loses redundancy, not the whole contents of the disk.

That's a huge improvement especially on very large disks.
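
You can watch this happen after an unclean restart (a sketch; the md
device name is illustrative):

cat /proc/mdstat                                   # only dirty arrays resync
mdadm --detail /dev/md2 | grep -E 'State|Rebuild'  # per-array state/progress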

Simon

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos