On Fri, 13 Jul 2018, Michael Hennebry wrote:
On Thu, 12 Jul 2018, Michael Hennebry wrote:
On Thu, 12 Jul 2018, Pete Biggs wrote:
For some reason you say you disliked Gnome - but does Gnome show issues
(they use the same video driver)?
No, neither gnome nor gnome-classic.
The black tape
Libraries: do they look OK?
[root@centos clamav]# ldd $(which freshclam)
linux-gate.so.1 => (0x00529000)
libclamav.so.7 => /usr/lib/libclamav.so.7 (0x00bc5000)
libxml2.so.2 => /usr/lib/libxml2.so.2 (0x00124000)
libbz2.so.1 => /lib/libbz2.so.1 (0x04906000)
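The library check Pete suggested can be scripted; a minimal sketch (using `ls` as a stand-in binary, since freshclam may not be present on every box) that flags any unresolved shared library in the ldd output:

```shell
# A "not found" line in ldd output means a shared library the binary
# needs is missing; ls stands in for freshclam here.
bin=$(command -v ls)
if ldd "$bin" | grep -q 'not found'; then
    echo "missing libraries for $bin"
else
    echo "all libraries resolved"
fi
```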
I fixed the country code issue, but that did not resolve the problem.
I also removed all files in /var/lib/clamav and reran freshclam (without rebooting); that did not fix the problem either.
Jay
On 15.07.2018 at 00:13, Jay Hart wrote:
ClamAV update process started at Sat Jul 14 15:10:48 2018
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1232
Software version from DNS: 0.100.1
WARNING: Your ClamAV installation is OUTDATED!
WARNING: Local version: 0.100.0 Recommended version: 0.100.1
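The mismatch freshclam warns about can be checked by hand; a minimal sketch comparing the two dotted versions from the output above with GNU `sort -V` (an assumption; BSD sort lacks `-V`):

```shell
local_ver=0.100.0   # version reported as installed
dns_ver=0.100.1     # version advertised via DNS
# sort -V orders dotted version strings; tail picks the newest.
newest=$(printf '%s\n%s\n' "$local_ver" "$dns_ver" | sort -V | tail -n1)
if [ "$newest" != "$local_ver" ]; then
    echo "OUTDATED: local $local_ver < recommended $dns_ver"
else
    echo "up to date"
fi
```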
Hello all,
Been having an issue today that I can't seem to solve, so I'm reaching out to others much more knowledgeable for help/advice/assistance.
I ran the software update this morning and installed 134 packages; clamd was one of them.
Upon completion of the update, I needed to reboot
On Sat, Jul 14, 2018 at 5:22 AM Nico Kadel-Garcia wrote:
> See above. Also, the base CentOS 7 3.10.0 kernel is becoming a bit
> dated: it's 5 years old now. If you have time: can you set up a
> smaller instance, do kernel updates on top of a CentOs 7 AMI, and see
> if *that* AMI is compatible
/dev/lvm_pool/lvol001 and /dev/mapper/lvm_pool-lvol001 work with kernel 514.
They don't work with kernel 862.
The googling continues . . .
___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos
On Sat, Jul 14, 2018 at 2:15 PM Tony Schreiner wrote:
> I don't have an answer to why kernel 514 is not booting,
> but what I was trying to say is:
>
> /dev/lvm_pool/lvol001
> and
> /dev/mapper/lvm_pool-lvol001
> are both symlinks to the same /dev/dm-X device file.
> You can use either name, but
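Tony's point can be demonstrated without touching the real LVM devices; a sketch that mimics the /dev layout in a temp directory (on the actual system you would simply run `readlink -f` on the two /dev paths and compare):

```shell
# Mimic the /dev layout: both LVM names are symlinks to one dm-X node.
tmp=$(mktemp -d)
touch "$tmp/dm-3"                          # stand-in for the real /dev/dm-X
mkdir -p "$tmp/lvm_pool" "$tmp/mapper"
ln -s "$tmp/dm-3" "$tmp/lvm_pool/lvol001"
ln -s "$tmp/dm-3" "$tmp/mapper/lvm_pool-lvol001"
readlink -f "$tmp/lvm_pool/lvol001"        # both commands print the
readlink -f "$tmp/mapper/lvm_pool-lvol001" # same dm-3 path
rm -rf "$tmp"
```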
On Sat, Jul 14, 2018 at 2:02 PM Mike <1100...@gmail.com> wrote:
> On Sat, Jul 14, 2018 at 1:57 PM Tony Schreiner
> wrote:
> >
> > >
> > > Is that first entry /dev/mapper/lvol001 right?
> > I'd expect /dev/mapper/lvm_pool-lvol001
>
> ssm list shows -
>
> /dev/lvm_pool/lvol001
>
> When I place
On Sat, Jul 14, 2018 at 1:57 PM Tony Schreiner wrote:
>
> >
> > Is that first entry /dev/mapper/lvol001 right?
> I'd expect /dev/mapper/lvm_pool-lvol001
ssm list shows -
/dev/lvm_pool/lvol001
When I place /dev/lvm_pool/lvol001 into /etc/fstab the computer will
boot using kernel 514.
Kernel 862
Tried --
umount -t xfs /mnt/data
vgchange -a n lvm_pool
vgexport lvm_pool
vgimport lvm_pool
Rebooted and kernel 862 still panics/hangs.
Can boot into kernel 514.
When I change /etc/fstab from /dev/mapper/lvol001 to
/dev/lvm_pool/lvol001, kernel 3.10.0-514 will boot.
Kernel 3.10.0-862 hangs and will not boot.
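The edit Mike describes can be tried safely on a copy first; a sketch using the device names from this thread (the sample fstab line itself is hypothetical):

```shell
# Work on a copy rather than /etc/fstab itself; the line is hypothetical.
f=$(mktemp)
printf '/dev/mapper/lvol001 /mnt/data xfs defaults 0 0\n' > "$f"
# Swap in the name that (per the thread) lets kernel 514 boot.
sed -i 's|^/dev/mapper/lvol001 |/dev/lvm_pool/lvol001 |' "$f"
grep lvol001 "$f"
rm -f "$f"
```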
Maybe not a good assumption after all --
I can no longer boot using kernel 3.10.0-514 or 3.10.0-862.
boot.log shows:
Dependency failed for /mnt/data
Dependency failed for Local File Systems
Dependency failed for Mark the need to relabel after reboot.
Dependency failed for Migrate local SELinux
I did the following test:
###
1.
Computer with Centos 7.5 installed on hard drive /dev/sda.
Added two hard drives to the computer: /dev/sdb and /dev/sdc.
Created a new logical volume in RAID-1 using RedHat System Storage Manager:
ssm create --fstype
It seems c5d.9xlarge and c5d.18xlarge are excluded intentionally; they are already grayed out in the AWS instance launch console.
Other providers (RH) enable them, though.
Thanks
-----Original Message-----
From: CentOS-virt On Behalf Of Nico Kadel-Garcia
Sent: Saturday, 14 July 2018 14:22
To:
On Sat, Jul 14, 2018 at 7:41 AM, Jens-Uwe Schluessler
wrote:
> Hi,
>
> why are larger AWS instances c5d.9xlarge and c5d.18xlarge (NVMe SSD
> attached) NOT supported by Centos7 AMI,
It wouldn't be the first time. I had problems with the i3 instances
when they first came out, and I've been dealing
Hi,
Why are the larger AWS instances c5d.9xlarge and c5d.18xlarge (NVMe SSD attached) NOT supported by the CentOS 7 AMI, while smaller instances (e.g. c5d.4xlarge) are supported?
The regular c5.9xlarge/c5.18xlarge are also supported.
Thanks, Jens-Uwe
Jens-Uwe Schlüßler