Hi Phil,
Not sure if you've had time to look at this? As mentioned, middleware like
docker-ce is preventing us from moving to el8.
Thanks
Dan
On Fri, Sep 11, 2020 at 10:33 PM Dedoep wrote:
> Hi Phil, ok that's great thanks.
> I have a colleague working through vroc/fake raid issues we're
At 01:34 PM 9/14/2020, you wrote:
What if you just dd the first 1 GB of the disk and the last 1 GB (the last
because some RAID controllers write their signatures to the end of the
disk)?
Look at this article and modify accordingly
I've never run into a system yet where using dd to write zeros over the
first few megabytes didn't completely wipe the disk as far as the OS and
existing file systems are concerned.
dd if=/dev/zero of=/dev/sde bs=65536 count=1024
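A sketch of the first-GB/last-GB idea from the thread, demonstrated on a scratch file so nothing real gets clobbered. For an actual disk you would set DEV=/dev/sdX and get the size with `blockdev --getsize64` instead of `stat`; the 1 MiB counts here would become 1024 for a full gigabyte.

```shell
# Zero the start and the end of a "disk".  Demonstrated on a scratch file;
# for a real device: DEV=/dev/sdX and SIZE=$(blockdev --getsize64 "$DEV").
DEV=$(mktemp)
dd if=/dev/urandom of="$DEV" bs=1M count=8 status=none   # stand-in disk

SIZE=$(stat -c %s "$DEV")          # total size in bytes
MB=$(( SIZE / 1048576 ))

# First and last 1 MiB here; use count=1024 (and adjust seek) for a full GB.
dd if=/dev/zero of="$DEV" bs=1M count=1 conv=notrunc status=none
dd if=/dev/zero of="$DEV" bs=1M seek=$(( MB - 1 )) count=1 conv=notrunc status=none
```

The seek= on the second dd skips straight to the final megabyte, which is where the backup GPT and some controllers' RAID metadata live.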
___
CentOS mailing list
At 02:36 PM 9/14/2020, you wrote:
On 2020-09-14 16:52, Robert Heller wrote:
At Mon, 14 Sep 2020 13:14:44 -0700 CentOS mailing list
wrote:
Folks
I've encountered situations where I want to reuse a hard-drive. I do
If it is a Seagate, don't bother. They have the highest failure rate in
the industry.
Look at the SMART
At Mon, 14 Sep 2020 13:14:44 -0700 CentOS mailing list
wrote:
>
> Folks
>
> I've encountered situations where I want to reuse a hard-drive. I do
> not want to preserve anything on the drive, and I'm not concerned
> about 'securely erasing' old content. I just want to be able to
> define
What if you just dd the first 1 GB of the disk and the last 1 GB (the last
because some RAID controllers write their signatures to the end of the
disk)?
Look at this article and modify accordingly
On Mon, Sep 14, 2020 at 3:18 PM david wrote:
> I've tried erasing the first megabyte of the disk, but there are ZFS
> or LVM structures that get in the way. So, does anyone have an
> efficient way to erase structures from a disk such that it can be reused?
>
GPT for sure has backup metadata at the end of the disk.
On 9/14/20 1:14 PM, david wrote:
I've tried erasing the first megabyte of the disk, but there are ZFS
or LVM structures that get in the way. So, does anyone have an
efficient way to erase structures from a disk such that it can be reused?
Use "wipefs -a" on any partition (or raw disk)
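To illustrate the wipefs suggestion, here is a hedged run-through on a scratch image file rather than a live /dev/sdX (the swap signature from mkswap just stands in for whatever LVM/ZFS/GPT signatures the real disk carries):

```shell
# wipefs lists the signatures it finds; -a erases them all, which on a real
# disk includes things like the backup GPT header at the end of the device.
IMG=$(mktemp)
truncate -s 16M "$IMG"
mkswap "$IMG" >/dev/null 2>&1     # stamp a swap signature on the image
wipefs "$IMG"                     # shows the swap signature
wipefs -a "$IMG" >/dev/null       # erase every signature found
wipefs "$IMG"                     # prints nothing now
```

Unlike zeroing fixed offsets with dd, wipefs knows where each signature type lives, so it also catches metadata stored away from the start of the disk.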
Folks
I've encountered situations where I want to reuse a hard-drive. I do
not want to preserve anything on the drive, and I'm not concerned
about 'securely erasing' old content. I just want to be able to
define it as a Physical Volume (in a logical volume set), or make it
a ZFS disk, or
CentOS Errata and Security Advisory 2020:3617 Important
Upstream details at : https://access.redhat.com/errata/RHSA-2020:3617
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
CentOS Errata and Security Advisory 2020:3631 Important
Upstream details at : https://access.redhat.com/errata/RHSA-2020:3631
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
x86_64:
CentOS Errata and Security Advisory 2020:3643 Important
Upstream details at : https://access.redhat.com/errata/RHSA-2020:3643
The following updated files have been uploaded and are currently
syncing to the mirrors: ( sha256sum Filename )
i386:
Make sure that the interface you are bridging to is not a wireless
interface; otherwise it won't work. Very deep in the KVM setup
documentation there is a warning about this: the way connections are
initiated on a wireless interface is different than on a wired one.
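A quick pre-flight check before building the bridge: the Linux kernel exposes a "wireless" subdirectory in sysfs for Wi-Fi interfaces, so you can test for it. (The interface name eth0 is just an example; substitute your own.)

```shell
# Wi-Fi NICs get a /sys/class/net/<if>/wireless directory; wired ones don't.
IF=eth0   # the interface you intend to bridge (adjust to your system)
if [ -d "/sys/class/net/$IF/wireless" ]; then
    echo "$IF is wireless; bridging it will not work as expected"
else
    echo "$IF is not wireless; OK to add to the bridge"
fi
```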
Dear team,
The auditd log for the NETFILTER_PKT event does not contain the source
port, destination port, or the in and out interfaces.
Has it been removed permanently
(https://patchwork.kernel.org/patch/9638183/),
or can it be enabled by some configuration via auditctl?
CentOS version: CentOS
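For context (my understanding, not stated in the thread): NETFILTER_PKT records are emitted by the netfilter AUDIT target, not by an auditctl rule, so if the linked kernel patch dropped those fields there is no auditctl switch to restore them. A minimal configuration to generate and then inspect the events looks like this (needs root; the rule and port 2222 are purely illustrative):

```shell
# Illustrative config fragment: the AUDIT target generates NETFILTER_PKT
# records; the dropped fields cannot be re-enabled through auditctl.
iptables -A INPUT -p tcp --dport 2222 -j AUDIT --type accept  # log matches
ausearch -m NETFILTER_PKT --start recent                      # inspect records
```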
On 13/09/2020 at 14:12, m...@tdiehl.org wrote:
> I too see this regularly. yum clean metadata stops it for a while. I am not
> sure if it is a problem with the way epel metadata is generated or just out
> of date mirrors. I occasionally see it with other repos but it happens with
> epel far more
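Spelling out the workaround mentioned above, limited to the epel repo so the other repos' caches are left alone (an admin fragment; needs root, and repo names are as configured on your system):

```shell
# Drop only epel's cached metadata, then refetch it from the mirrors --
# the stopgap from the thread for stale-mirror metadata errors.
yum --disablerepo='*' --enablerepo=epel clean metadata
yum --disablerepo='*' --enablerepo=epel makecache
```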