Re: [CentOS] In place upgrade of RHEL 8 Beta to CentOS 8?
On 4/8/19 3:20 PM, Benjamin Smith wrote:
> I'm about to rebuild a server, currently running CentOS 6. If I have to do an
> OS reinstall, my intention is to upgrade, as it's the oldest OS server under
> my purview. As this server is pretty low visibility, I'd like to see if I can
> start using EL 8 instead of 7.x.
>
> In the past, I was able to switch between different OS variants by simply
> changing out the yum.d files; EG: RHEL 6.x becomes CentOS 6.x by replacing a
> single RPM and doing a `yum -y clean all; yum -y update` without issue.
>
> How likely is it that similar functionality will exist switching from RHEL 8
> Beta to CentOS 8 final? Google pounding provided little info. I couldn't even
> find useful information for the transition from RHEL 7 Beta.

There will not be any plan to do that, no. Nor could you upgrade from RHEL-8
beta to RHEL-8. They just don't build it with that in mind.

As Smooge said .. it might be possible. But the whole point of the beta is to
allow for design changes. The full package set was likely not completely set,
so some things could be removed or added, and a bunch of manual removals and
re-installs would be required. Different libraries may be linked. Etc, etc.

I can't see almost any circumstance where I would recommend doing this.

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
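For readers unfamiliar with the EL6-era variant swap the poster describes, a rough sketch follows. The release package names are from memory and the exact RPM versions vary per point release, so treat every name here as an assumption; the `current_flavor` helper is invented for illustration.

```shell
# Sketch of the RHEL 6.x -> CentOS 6.x swap described above.
# All package names are assumptions; verify against your release.

current_flavor() {
  # Reads an /etc/redhat-release-style line on stdin and prints
  # "rhel", "centos", or "unknown".
  read -r line
  case "$line" in
    "Red Hat Enterprise Linux"*) echo rhel ;;
    "CentOS"*)                   echo centos ;;
    *)                           echo unknown ;;
  esac
}

# The swap itself (illustration only -- do not run blind):
#   rpm -e --nodeps redhat-release-server
#   rpm -ivh centos-release-<version>.rpm   # the matching CentOS release RPM
#   yum -y clean all && yum -y update
```

Usage: `current_flavor < /etc/redhat-release` tells you which release package family is installed before you start swapping anything.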
[CentOS] CentOS Dojo at DevConf.US: CFP and Registration open
TL;DR: CentOS Dojo on August 14th at Boston University. CFP and Registration
now available at https://wiki.centos.org/Events/Dojo/DevConfUS2019

Hello, folks.

We will once again be holding the CentOS Dojo at Boston University, on the
day before DevConf.US 2019. The details of DevConf.US are at
https://devconf.us/ and the details of the CentOS Dojo are at
https://wiki.centos.org/Events/Dojo/DevConfUS2019

The call for presentations is now open, and will close May 1st.

Registration is now open. It's free to attend, but we need an attendee count
for planning purposes.

You can see last year's schedule -
https://wiki.centos.org/Events/Dojo/DevConfUS2018 - for an example of what
kind of presentations we might have. And we'd love to hear from you - your
stories of running your infra on CentOS, the projects that you're working on,
and interesting challenges you've had. Please do submit your talk proposals!

Based on requests last year, we expect to do a lightning talks session, with
5-minute impromptu talks for those who don't have a full presentation.

Hope to see you in Boston!

--
Rich Bowen - rbo...@redhat.com
@CentOSProject // @rbowen
859 351 9166
Re: [CentOS] Kernel panic after removing SW RAID1 partitions, setting up ZFS.
> In article <6566355.ijnrhnp...@tesla.schoolpathways.com>,
> Benjamin Smith wrote:
>> System is CentOS 6 all up to date, previously had two drives in MD RAID
>> configuration.
>>
>> md0: sda1/sdb1, 20 GB, OS / Partition
>> md1: sda2/sdb2, 1 TB, data mounted as /home
>>
>> Installed kmod ZFS via yum, reboot, zpool works fine. Backed up the
>> /home data 2x, then stopped the sd[ab]2 partition with:
>>
>> mdadm --stop /dev/md1;
>> mdadm --zero-superblock /dev/sd[ab]1;
>
> Did you mean /dev/sd[ab]2 instead?
>
>> Removed /home in /etc/fstab. Used fdisk to set the partition type to gpt
>> for sda2 and sdb2, then built *then destroyed* a ZFS mirror pool using
>> the two partitions.
>>
>> Now the system won't boot, has a kernel panic. I'm remote, so I'll be
>> going in tomorrow to see what's up. My assumption is that it has
>> something to do with mdadm/RAID not being "fully removed".
>>
>> Any idea what I might have missed?
>
> I think it's because you clobbered md0 when you did --zero-superblock on
> sd[ab]1 instead of 2.
>
> Don't you love it when some things count from 0 and others from 1?

That's really a problem but difficult to fix I guess. IMHO it's better to
keep things the way they are as long as the solution is not really better
than the old behavior. Maybe the new Linux Ethernet naming scheme can serve
as a bad example if you ask me.

But here, mdadm could have done better: --zero-superblock checks if the
device contains a valid md superblock, but it fails to also check if the
device belongs to a running md device :-(

If it turns out that this is your problem, maybe you could ask the mdadm
developers to improve it?

Regards,
Simon
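The missing check Simon describes can be approximated in a wrapper around mdadm. This is only a sketch, not mdadm's actual behavior: the function names are invented, and the parsing assumes the common "md0 : active raid1 sda1[0] sdb1[1]" line format in /proc/mdstat.

```shell
#!/bin/sh
# Sketch of the guard Simon wishes --zero-superblock had: refuse to
# zero a superblock while the device is still a member of an active
# array. Function names are invented for illustration.

is_md_member() {
  # $1 = bare device name, e.g. "sda1"; reads /proc/mdstat-format text
  # on stdin. Member entries appear as "sda1[0]", so match the name
  # followed by a literal "[".
  grep -qE "(^| )$1\["
}

guarded_zero_superblock() {
  name=$(basename "$1")
  if is_md_member "$name" < /proc/mdstat; then
    echo "refusing: $name is part of an active md array" >&2
    return 1
  fi
  mdadm --zero-superblock "$1"
}
```

With md0 still assembled from sda1/sdb1, `guarded_zero_superblock /dev/sda1` would refuse and return non-zero instead of silently clobbering the running OS array.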
Re: [CentOS] Kernel panic after removing SW RAID1 partitions, setting up ZFS.
In article <6566355.ijnrhnp...@tesla.schoolpathways.com>,
Benjamin Smith wrote:
> System is CentOS 6 all up to date, previously had two drives in MD RAID
> configuration.
>
> md0: sda1/sdb1, 20 GB, OS / Partition
> md1: sda2/sdb2, 1 TB, data mounted as /home
>
> Installed kmod ZFS via yum, reboot, zpool works fine. Backed up the /home
> data 2x, then stopped the sd[ab]2 partition with:
>
> mdadm --stop /dev/md1;
> mdadm --zero-superblock /dev/sd[ab]1;

Did you mean /dev/sd[ab]2 instead?

> Removed /home in /etc/fstab. Used fdisk to set the partition type to gpt
> for sda2 and sdb2, then built *then destroyed* a ZFS mirror pool using the
> two partitions.
>
> Now the system won't boot, has a kernel panic. I'm remote, so I'll be
> going in tomorrow to see what's up. My assumption is that it has something
> to do with mdadm/RAID not being "fully removed".
>
> Any idea what I might have missed?

I think it's because you clobbered md0 when you did --zero-superblock on
sd[ab]1 instead of 2.

Don't you love it when some things count from 0 and others from 1?

Cheers
Tony
--
Tony Mountifield
Work: t...@softins.co.uk - http://www.softins.co.uk
Play: t...@mountifield.org - http://tony.mountifield.org
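One way to avoid the off-by-one Tony diagnoses is to resolve an array's members from /proc/mdstat instead of typing the partition numbers by hand. A sketch, assuming the layout from the thread (md0 = sd[ab]1 holds the OS, md1 = sd[ab]2 held /home) and the usual "md1 : active raid1 sda2[1] sdb2[0]" mdstat line format; `members_of` is an invented helper:

```shell
#!/bin/sh
# Sketch: derive the member devices of an md array from mdstat-format
# text, so the partitions that get --zero-superblock'd are exactly the
# ones belonging to the array being torn down.

members_of() {
  # $1 = array name like "md1"; reads /proc/mdstat-format text on stdin
  # and prints one member device path per line, e.g. "/dev/sda2".
  awk -v md="$1" '$1 == md {
    for (i = 5; i <= NF; i++) {      # fields 5..NF are "sda2[1]" etc.
      sub(/\[.*/, "", $i)            # strip the "[n]" slot suffix
      print "/dev/" $i
    }
  }'
}

# Usage (destructive -- shown for illustration only):
#   mdadm --stop /dev/md1
#   members_of md1 < /proc/mdstat | xargs -n1 mdadm --zero-superblock
```

Fed the thread's layout, `members_of md1` prints /dev/sda2 and /dev/sdb2, never the md0 members that the original one-liner wiped.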