Re: [CentOS-docs] ACL for SpecialInterestGroup/Hyperscale
On Thu, Apr 1, 2021 at 4:19 PM Davide Cavalca via CentOS-docs
<centos-docs@centos.org> wrote:
> Hi,
>
> I tried adding MichelSalim to the ACL for
> SpecialInterestGroup/Hyperscale, but it fails because "You can't change
> ACLs on this page since you have no admin rights on it!". Could you
> please add him, or even better give me admin rights on the page so I can
> maintain the ACL myself? My account is DavideCavalca. Thanks!
>
> Cheers
> Davide

Hi Davide,

You should have admin rights now. Please let us know if you need further
assistance.

Akemi

___
CentOS-docs mailing list
CentOS-docs@centos.org
https://lists.centos.org/mailman/listinfo/centos-docs
[CentOS-docs] ACL for SpecialInterestGroup/Hyperscale
Hi,

I tried adding MichelSalim to the ACL for SpecialInterestGroup/Hyperscale,
but it fails because "You can't change ACLs on this page since you have no
admin rights on it!". Could you please add him, or even better give me
admin rights on the page so I can maintain the ACL myself? My account is
DavideCavalca. Thanks!

Cheers
Davide
Re: [CentOS] Can't upgrade sssd-*
On Mar 26, 2021, at 7:08 AM, Warren Young wrote:
>
> Is anyone else getting this on dnf upgrade?
>
>     [MIRROR] sssd-proxy-2.3.0-9.el8.x86_64.rpm: Interrupted by header callback:
>     Server reports Content-Length: 9937 but expected size is: 143980

The short reply size made me think to try a packet capture, and it turned
out to be a message from the site's "transparent" HTTP proxy, telling me
that the content is blocked.

Rather than fight with site IT over the block list, I have a new question:
is there any plan for getting HTTPS-only updates in CentOS? Changing all
"http" to "https" in my repo conf files just made the update stall, so I
assume there are mirrors that are still HTTP-only.

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
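The http-to-https edit Warren describes can be scripted; here is a sketch of the mechanism, demonstrated on a temporary copy rather than the live /etc/yum.repos.d files (the repo file contents below are made up for illustration):

```shell
# Demonstrate the http -> https rewrite on a throwaway repo file
# (hostnames here are placeholders, not real CentOS mirrors)
tmp=$(mktemp -d)
cat > "$tmp/demo.repo" <<'EOF'
[baseos]
name=Demo BaseOS
baseurl=http://mirror.example.org/centos/8/BaseOS/x86_64/os/
mirrorlist=http://mirrorlist.example.org/?release=8&repo=BaseOS
EOF
# Rewrite every URL scheme; -i.bak keeps a backup next to the original
sed -i.bak 's|=http://|=https://|g' "$tmp/demo.repo"
grep -c '=https://' "$tmp/demo.repo"   # both URLs now use HTTPS
```

Note that even with this change, a mirrorlist fetched over HTTPS may still hand back http:// mirror URLs, which would match the stall Warren observed.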
Re: [CentOS] KVM vs. incremental remote backups
> All relevant logging is centralised to a server cluster running Graylog.

... and, because I forgot to mention it: yes, that server cluster has a
"persistent data" device.

Regards,
Peter.
Re: [CentOS] KVM vs. incremental remote backups
Hi Simon,

> Whenever I read such things I'm wondering, what about things like log
> files? Do you call them OS files or persistent data? How do you back'em up
> then?

I don't. All relevant logging is centralised to a server cluster running
Graylog.

Regards,
Peter.
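Centralising logs as Peter describes usually amounts to a one-line forwarder on each VM; a minimal rsyslog sketch (the Graylog hostname and port below are assumptions, not from the thread):

```
# /etc/rsyslog.d/50-forward.conf  (sketch; host and port are assumptions)
# "@@" forwards over TCP, a single "@" would use UDP
*.* @@graylog.example.org:5140;RSYSLOG_SyslogProtocol23Format
```

With every VM shipping its logs off-box like this, log files no longer need to live on either the OS or the persistent-data volume.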
Re: [CentOS] Debuginfo for CentOS 8 Stream
On 3/29/21 1:28 PM, Matthew Saltzman wrote:
> My CentOS 8 Stream installation is fully current. I did
>
>     sudo debuginfo-install libgcc libstdc++
>
> but the response I get is
>
>     Could not find debuginfo package for the following installed packages:
>     libgcc-8.4.1-1.el8.x86_64, libstdc++-8.4.1-1.el8.x86_64
>     Could not find debugsource package for the following installed
>     packages: libgcc-8.4.1-1.el8.x86_64, libstdc++-8.4.1-1.el8.x86_64
>
> If I do
>
>     sudo dnf --enablerepo=debuginfo debuginfo-install libgcc libstdc++
>
> I get
>
>     CentOS Stream 8 - Debuginfo  35 kB/s | 7.6 kB  00:00
>     Errors during downloading metadata for repository 'debuginfo':
>       - Status code: 404 for
>       http://debuginfo.centos.org/8-stream/x86_64/repodata/repomd.xml
>       (IP: 2620:52:3::12)
>     Error: Failed to download metadata for repo 'debuginfo': Cannot
>     download repomd.xml: Cannot download repodata/repomd.xml: All
>     mirrors were tried
>
> What do I need to do to get these debuginfo packages?
>
> TIA.

We have added storage and modified our staging scripts to output this info.
I am going to try to push these later today to debuginfo.centos.org. If we
have enough mirror space, these should be available live tomorrow. If we
need more mirror space or something else happens and this cannot go live,
I'll reply to this mail.

Thanks,
Johnny Hughes
[CentOS] SELINUX blocks procmail from executing perl script without logging
Hi,

I'm upgrading our Request Tracker from CentOS 7 to 8 and found some
unexpected SELinux issues with procmail. Even after I create a policy that
allows all denied operations, procmail is still not allowed to run a Perl
script (in my case rt-mailgate). I get the following error in the procmail
log:

    Can't open perl script "/opt/rt5/bin/rt-mailgate": Permission denied

but I have no denied audit entry in /var/log/audit/audit.log. If I set
SELinux to permissive, everything works fine. Any idea how to debug this?

Best regards,
Radu
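A denial that leaves no AVC entry in audit.log is often suppressed by a dontaudit rule in the policy; one common debugging step (not from this thread, and the commands require root on an SELinux-enabled host) is to disable those rules temporarily and reproduce the failure:

```shell
# Rebuild the policy with dontaudit rules disabled, so normally
# silenced denials get logged
sudo semodule -DB

# ...reproduce the failure here, e.g. deliver a test mail through procmail...

# The previously hidden AVC records should now be visible
sudo ausearch -m avc -ts recent

# Re-enable dontaudit rules when done
sudo semodule -B
```

Once the denial is visible, it can be fed to audit2allow as usual to extend the local policy module.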
Re: [CentOS] KVM vs. incremental remote backups
> Hi Niki,
>
> I'm using a similar approach to Stephen's, but with a kink.
>
> * Kickstart all machines from a couple of ISOs, depending on the
>   requirements (the Kickstart process is controlled by Ansible)
> * Machines that have persistent data (which make up about 50% on average)
>   have at least two virtual disk devices: one for the OS (which gets
>   overwritten by Kickstart when a machine is re-created), and another one
>   for persistent data (which Kickstart doesn't touch)
> * Ansible sets up everything on the base server Kickstart provides,
>   starting from basic OS hardening and authentication and ending with
>   monitoring and backup of the data volume
> * Backup is done via Bareos to a redundant storage server
>
> That way I can reinitialise a VM at any time without having to care for
> the persistent data in most cases. If persistent data needs to be restored
> as well, Bareos can handle that as soon as the machine has been set up via
> Ansible. OS files are never backed up at all.

Whenever I read such things I'm wondering, what about things like log
files? Do you call them OS files or persistent data? How do you back'em up
then?

Regards,
Simon
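The two-disk split quoted above takes only a few Kickstart lines; a sketch (the device names vda/vdb are assumptions, not from the thread):

```
# Kickstart fragment (sketch; device names are assumptions)
ignoredisk --only-use=vda            # Kickstart may only touch the OS disk
clearpart --all --drives=vda --initlabel
autopart --type=lvm
# /dev/vdb (the persistent-data disk) is never referenced above,
# so a rebuild leaves it intact
```

Because the data disk is excluded from partitioning entirely, re-kickstarting the VM cannot clobber it even by accident.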
Re: [CentOS] KVM vs. incremental remote backups
Hi Niki,

I'm using a similar approach to Stephen's, but with a kink.

* Kickstart all machines from a couple of ISOs, depending on the
  requirements (the Kickstart process is controlled by Ansible)
* Machines that have persistent data (which make up about 50% on average)
  have at least two virtual disk devices: one for the OS (which gets
  overwritten by Kickstart when a machine is re-created), and another one
  for persistent data (which Kickstart doesn't touch)
* Ansible sets up everything on the base server Kickstart provides,
  starting from basic OS hardening and authentication and ending with
  monitoring and backup of the data volume
* Backup is done via Bareos to a redundant storage server

That way I can reinitialise a VM at any time without having to care for
the persistent data in most cases. If persistent data needs to be restored
as well, Bareos can handle that as soon as the machine has been set up via
Ansible. OS files are never backed up at all.

An improvement I'm planning to look into is moving from Kickstart to
Terraform for provisioning the base machines. Currently it takes me about
10 minutes to recreate a broken VM, provided the persistent data is left
intact.

Cheers,
Peter.

> On 31. Mar 2021, at 14:41, Nicolas Kovacs wrote:
>
> Hi,
>
> Up until recently I've hosted all my stuff (web & mail) on a handful of
> bare metal servers. Web applications (WordPress, OwnCloud, Dolibarr,
> GEPI, Roundcube) as well as mail and a few other things were hosted
> mostly on one big machine.
>
> Backups for this setup were done using Rsnapshot, a nifty utility that
> combines Rsync over SSH and hard links to make incremental backups.
>
> This approach has become problematic, for several reasons. First, web
> applications have increasingly specific and sometimes mutually exclusive
> requirements. And second, last month I had a server crash, and even
> though I had backups for everything, this meant quite some offline time.
>
> So I've opted to go for KVM-based solutions, with everything split up
> over a series of KVM guests. I wrapped my head around KVM, played around
> with it (a lot) and now I'm more or less ready to go.
>
> One detail is nagging me though: backups.
>
> Let's say I have one VM that handles only DNS (base installation + BIND)
> and one other VM that handles mail (base installation + Postfix +
> Dovecot).
>
> Under the hood that's two QCOW2 images stored in /var/lib/libvirt/images.
>
> With the old "bare metal" approach I could perform remote backups using
> Rsync, so only the difference between two backups would get transferred
> over the network. Now with KVM images it looks like every day I have to
> transfer the whole image again. As soon as some images have lots of data
> on them (say, 100 GB for a small OwnCloud server), this quickly becomes
> unmanageable.
>
> I googled around quite some time for "KVM backup best practices" and was
> a bit puzzled to find many folks asking the same question and no real
> answer, at least not without having to jump through burning hoops.
>
> Any suggestions?
>
> Niki
>
> --
> Microlinux - Solutions informatiques durables
> 7, place de l'église - 30730 Montpezat
> Site : https://www.microlinux.fr
> Blog : https://blog.microlinux.fr
> Mail : i...@microlinux.fr
> Tél. : 04 66 63 10 32
> Mob. : 06 51 80 12 12