Re: [CentOS] remote disk decryption on centos?

2021-03-14 Thread Gordon Messmer

On 3/12/21 1:51 PM, ept8e...@secmail.pro wrote:

Hi, I was reading about how to unlock an encrypted root partition remotely
(unattended). I'd like to ask: what is a compatible way to do this in
CentOS, and what do administrators commonly use?



What's your threat model?  Are you trying to protect the system from 
physical theft, or are you trying to make sure the disks aren't readable 
when they're retired or fail?


For most purposes, I recommend enrolling the disk with the TPM2 chip, so 
that disks can be unlocked at boot without human intervention.  If theft 
is a concern, you'd need to ensure that the bootloader requires a 
password, and that the firmware boots only from the internal disk 
without a password:


    clevis luks bind -d /dev/VOLUME tpm2 '{"pcr_ids":"7"}'
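
In practice the binding is a few steps. The sketch below assumes CentOS/RHEL 8 package names and that the volume is already LUKS-encrypted; adjust device paths for your system:

```shell
# Install Clevis with its LUKS and early-boot integration
# (package names as on CentOS 8; they may differ on other releases)
dnf install -y clevis clevis-luks clevis-dracut

# Bind the LUKS volume to the TPM2, sealed against PCR 7 (Secure Boot state)
clevis luks bind -d /dev/VOLUME tpm2 '{"pcr_ids":"7"}'

# Rebuild the initramfs so the unlock hook runs during early boot
dracut -f

# Verify the binding
clevis luks list -d /dev/VOLUME
```

Note that without the initramfs rebuild the volume will still prompt for a passphrase at boot.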

___
CentOS mailing list
CentOS@centos.org
https://lists.centos.org/mailman/listinfo/centos


Re: [CentOS] CentOS-7-x86_64-dvd-2009.iso is too big for DVD blanks

2021-03-14 Thread Robert G. (Doc) Savage via CentOS
On Sun, 2021-03-14 at 21:31 -0400, John Plemons wrote:
> Sounds like you need to use a dual layer DVD disc, it is double the 
> capacity.

John,

Wrong answer. The server's optical drive doesn't support double-layer
disks. The CentOS developers made a mistake on their DVD iso, and they
need to fix it.

--Doc


Re: [CentOS] CentOS-7-x86_64-dvd-2009.iso is too big for DVD blanks

2021-03-14 Thread Valeri Galtsev


> On Mar 14, 2021, at 8:36 PM, Valeri Galtsev  wrote:
> 
> 
> 
>> On Mar 14, 2021, at 8:13 PM, Robert G. (Doc) Savage via CentOS 
>>  wrote:
>> 
>> I need help from someone experienced with the CentOS bug tracking
>> system. I gotta say it is one of the most complicated and imposing
>> front ends I've ever seen. Could anyone familiar with it please file a
>> bug on my behalf? Particulars:
>> 
>> "CentOS 7.9.2009 DVD iso image too large"
>> 
>> ISO image: CentOS-7-x86_64-DVD-2009.iso 4.7GB raw CD image
>> Wed Nov  4 05:37:25 2020
>> Burners: Both K3B and Brasero
>> Media: Both DVD-R and DVD+R single-layer disks
>> 
>> iso image: 4,712,300,544 bytes
>> User Anthony F McInerney advises Wikipedia says
>> DVD-R capacity: 4,707,319,808 bytes (max)
>> 
>> I have tried burning this same iso image on two different machines: a
>> CentOS 7.9 server and a Fedora 33 laptop. Same failure on both.
>> 
>> We need to ask the developers to make a re-spin that's about 5MB
>> smaller. And before someone suggests it, the 2010-vintage server I'm
>> trying to install CentOS on does not support booting from a thumb
>> drive, so that option is not available.
> 
> Double layer DVD comes to my mind.
> 

Another thing came to my mind: you can try growisofs on the command line with
the -overburn option.
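
An example invocation (the option is spelled -overburn; the device path /dev/sr0 is an assumption, and overburning only works if both drive and media tolerate it):

```shell
# Burn the ISO, allowing growisofs to exceed the media's nominal capacity
growisofs -overburn -dvd-compat -Z /dev/sr0=CentOS-7-x86_64-DVD-2009.iso
```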

Valeri

> But I agree, it is annoying, and I’ve seen things like this before: it is
> not the first time I’ve seen an alleged DVD image that doesn’t fit on the
> DVD it’s supposed to be burned to.
> 
> Valeri
> 
>> Thanks,
>> 
>> --Doc Savage
>>Fairview Heights, IL
>> 
> 



Re: [CentOS] CentOS-7-x86_64-dvd-2009.iso is too big for DVD blanks

2021-03-14 Thread Valeri Galtsev


> On Mar 14, 2021, at 8:13 PM, Robert G. (Doc) Savage via CentOS 
>  wrote:
> 
> I need help from someone experienced with the CentOS bug tracking
> system. I gotta say it is one of the most complicated and imposing
> front ends I've ever seen. Could anyone familiar with it please file a
> bug on my behalf? Particulars:
> 
> "CentOS 7.9.2009 DVD iso image too large"
> 
> ISO image: CentOS-7-x86_64-DVD-2009.iso 4.7GB raw CD image
> Wed Nov  4 05:37:25 2020
> Burners: Both K3B and Brasero
> Media: Both DVD-R and DVD+R single-layer disks
> 
> iso image: 4,712,300,544 bytes
> User Anthony F McInerney advises Wikipedia says
> DVD-R capacity: 4,707,319,808 bytes (max)
> 
> I have tried burning this same iso image on two different machines: a
> CentOS 7.9 server and a Fedora 33 laptop. Same failure on both.
> 
> We need to ask the developers to make a re-spin that's about 5MB
> smaller. And before someone suggests it, the 2010-vintage server I'm
> trying to install CentOS on does not support booting from a thumb
> drive, so that option is not available.

Double layer DVD comes to my mind.

But I agree, it is annoying, and I’ve seen things like this before: it is
not the first time I’ve seen an alleged DVD image that doesn’t fit on the
DVD it’s supposed to be burned to.

Valeri

> Thanks,
> 
> --Doc Savage
> Fairview Heights, IL
> 



Re: [CentOS] CentOS-7-x86_64-dvd-2009.iso is too big for DVD blanks

2021-03-14 Thread John Plemons
Sounds like you need to use a dual-layer DVD disc; it has double the
capacity.


john


On 3/14/2021 9:13 PM, Robert G. (Doc) Savage via CentOS wrote:

I need help from someone experienced with the CentOS bug tracking
system. I gotta say it is one of the most complicated and imposing
front ends I've ever seen. Could anyone familiar with it please file a
bug on my behalf? Particulars:

"CentOS 7.9.2009 DVD iso image too large"

ISO image: CentOS-7-x86_64-DVD-2009.iso 4.7GB raw CD image
Wed Nov  4 05:37:25 2020
Burners: Both K3B and Brasero
Media: Both DVD-R and DVD+R single-layer disks

iso image: 4,712,300,544 bytes
User Anthony F McInerney advises Wikipedia says
DVD-R capacity: 4,707,319,808 bytes (max)

I have tried burning this same iso image on two different machines: a
CentOS 7.9 server and a Fedora 33 laptop. Same failure on both.

We need to ask the developers to make a re-spin that's about 5MB
smaller. And before someone suggests it, the 2010-vintage server I'm
trying to install CentOS on does not support booting from a thumb
drive, so that option is not available.

Thanks,

--Doc Savage
     Fairview Heights, IL







[CentOS] CentOS-7-x86_64-dvd-2009.iso is too big for DVD blanks

2021-03-14 Thread Robert G. (Doc) Savage via CentOS
I need help from someone experienced with the CentOS bug tracking
system. I gotta say it is one of the most complicated and imposing
front ends I've ever seen. Could anyone familiar with it please file a
bug on my behalf? Particulars:

"CentOS 7.9.2009 DVD iso image too large"

ISO image: CentOS-7-x86_64-DVD-2009.iso 4.7GB raw CD image
Wed Nov  4 05:37:25 2020
Burners: Both K3B and Brasero
Media: Both DVD-R and DVD+R single-layer disks

iso image: 4,712,300,544 bytes
User Anthony F McInerney advises Wikipedia says
DVD-R capacity: 4,707,319,808 bytes (max)

I have tried burning this same iso image on two different machines: a
CentOS 7.9 server and a Fedora 33 laptop. Same failure on both.

We need to ask the developers to make a re-spin that's about 5MB
smaller. And before someone suggests it, the 2010-vintage server I'm
trying to install CentOS on does not support booting from a thumb
drive, so that option is not available.
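
For what it's worth, the quoted sizes can be checked directly; the image overshoots single-layer DVD-R capacity by just under 5 MB:

```shell
# ISO size minus DVD-R capacity, using the byte counts quoted above
echo $(( 4712300544 - 4707319808 ))   # prints 4980736, i.e. 4.75 MiB over
```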

Thanks,

--Doc Savage
    Fairview Heights, IL



Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Strahil Nikolov via CentOS
True. I wiped a VM this way, years ago.

Best Regards,
Strahil Nikolov

On Sun, Mar 14, 2021 at 20:05, Simon Matter wrote:
> I'm constantly using fdisk on GPT and everything has been fine.
> Best Regards, Strahil Nikolov

That's only true in recent times, because in the past fdisk didn't support
GPT at all. Back then you had to use tools like parted.

Simon

>
>
> On Fri, Mar 12, 2021 at 15:30, Simon Matter wrote:
>> Hi,
>>
>> Is there a way to expand xfs filesystem /dev/nvme0n1p2 which is 7.8G and
>> occupy the remaining free disk space of 60GB?
>>
>> [root@ip-10-0-0-218 centos]# df -hT --total
>> Filesystem    Type      Size  Used Avail Use% Mounted on
>> devtmpfs      devtmpfs  1.7G    0  1.7G  0% /dev
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /dev/shm
>> tmpfs          tmpfs    1.7G  23M  1.7G  2% /run
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /sys/fs/cgroup
>> /dev/nvme0n1p2 xfs       7.8G  7.0G  824M  90% /
>> expand /dev/nvme0n1p2 which is 7.8G and occupy the remaining free disk
>> space of 60GB.
>> /dev/nvme0n1p1 vfat      599M  6.4M  593M  2% /boot/efi
>> tmpfs          tmpfs    345M    0  345M  0% /run/user/1000
>> total          -          16G  7.0G  8.5G  46% -
>> [root@ip-10-0-0-218 centos]# fdisk -l
>> GPT PMBR size mismatch (20971519 != 125829119) will be corrected by
>> write.
>> The backup GPT table is not on the end of the device. This problem will
>> be
>> corrected by write.
>
> How did you end up in this situation? Did you copy the data from a smaller
> disk to this 60G disk?
>
>> Disk /dev/nvme0n1: 60 GiB, 64424509440 bytes, 125829120 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disklabel type: gpt
>> Disk identifier: E97B9FFA-2C13-474E-A0E4-ABF1572CD20C
>>
>> Device            Start      End  Sectors  Size Type
>> /dev/nvme0n1p1    2048  1230847  1228800  600M EFI System
>> /dev/nvme0n1p2  1230848 17512447 16281600  7.8G Linux filesystem
>> /dev/nvme0n1p3 17512448 17514495    2048    1M BIOS boot
>
> Looks like you could move p3 to the end of the disk and then enlarge p2
> and then grow the XFS on it.
>
> I'm not sure it's a good idea to use fdisk on a GPT disk. At least in the
> past this wasn't supported and I don't know how much has changed here. I
> didn't touch a lot of GPT systems yet, and where I did I felt frightened
> by the whole EFI stuff :)
>
> Regards,
> Simon
>
>
>


  


Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Scott Robbins
On Sun, Mar 14, 2021 at 07:05:37PM +0100, Simon Matter wrote:
> > I'm constantly using fdisk on GPT and everything has been fine.
> > Best Regards, Strahil Nikolov
> 
> That's only true in recent times, because in the past fdisk didn't support
> GPT at all. Back then you had to use tools like parted.
> 

I've only been playing with GPT and UEFI on CentOS for a little while. I'd
read that fdisk wasn't a good idea, but it turns out that CentOS has gdisk,
which is very similar and cgdisk which is like cfdisk. 

I've been using them with no problem. I did, in a laptop where I was
running several distributions for fun, use gparted to expand a partition,
and that also worked without problem.  


-- 
Scott Robbins
PGP keyID EB3467D6
( 1B48 077D 66F6 9DB0 FDC2 A409 FA54 EB34 67D6 )
gpg --keyserver pgp.mit.edu --recv-keys EB3467D6



Re: [CentOS] Bare metal vs. virtualization: Proxmox + Ceph + CentOS ?

2021-03-14 Thread Simon Matter
> Am 14.03.21 um 07:13 schrieb Nicolas Kovacs:
>>
>> Now here’s the problem: it took me three and a half days of intense work
>> to
>> restore everything and get everything running again. Three and a half
>> days of
>> downtime is quite a stretch.
>>
>
> What was the real problem? Why did you need days to restore
> from backups? Maybe the new solution is attached here?

I thought the same. What happened to your previous hardware?

First, using RAID1-6 you should not lose your storage so easily. So what
can happen:

a) hardware dies, disks are still fine -> move disks to new hardware and
only adjust settings for new hardware.
b) one disk dies, means no damage but need to replace disk.
c) hardware dies completely with all disks -> new replacement hardware
required.

Cases a and b can usually be handled quite fast, especially if you keep
replacement parts ready; case c really happens almost never.

Then, why did it take so long to get up and running again?

One important thing to keep in mind is that transferring data from a backup
can take a long time if a lot of data is involved. Restoring multiple
terabytes usually takes more time than one might expect. I, at least, tend
to forget that in my daily work and assume things should go fast with
modern hardware. That's not always true with today's storage sizes.

Regards,
Simon



Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Simon Matter
> I'm constantly using fdisk on GPT and everything has been fine.
> Best Regards, Strahil Nikolov

That's only true in recent times, because in the past fdisk didn't support
GPT at all. Back then you had to use tools like parted.

Simon

>
>
> On Fri, Mar 12, 2021 at 15:30, Simon Matter wrote:
>> Hi,
>>
>> Is there a way to expand xfs filesystem /dev/nvme0n1p2 which is 7.8G and
>> occupy the remaining free disk space of 60GB?
>>
>> [root@ip-10-0-0-218 centos]# df -hT --total
>> Filesystem    Type      Size  Used Avail Use% Mounted on
>> devtmpfs      devtmpfs  1.7G    0  1.7G  0% /dev
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /dev/shm
>> tmpfs          tmpfs    1.7G  23M  1.7G  2% /run
>> tmpfs          tmpfs    1.7G    0  1.7G  0% /sys/fs/cgroup
>> /dev/nvme0n1p2 xfs       7.8G  7.0G  824M  90% /
>> expand /dev/nvme0n1p2 which is 7.8G and occupy the remaining free disk
>> space of 60GB.
>> /dev/nvme0n1p1 vfat      599M  6.4M  593M  2% /boot/efi
>> tmpfs          tmpfs    345M    0  345M  0% /run/user/1000
>> total          -          16G  7.0G  8.5G  46% -
>> [root@ip-10-0-0-218 centos]# fdisk -l
>> GPT PMBR size mismatch (20971519 != 125829119) will be corrected by
>> write.
>> The backup GPT table is not on the end of the device. This problem will
>> be
>> corrected by write.
>
> How did you end up in this situation? Did you copy the data from a smaller
> disk to this 60G disk?
>
>> Disk /dev/nvme0n1: 60 GiB, 64424509440 bytes, 125829120 sectors
>> Units: sectors of 1 * 512 = 512 bytes
>> Sector size (logical/physical): 512 bytes / 512 bytes
>> I/O size (minimum/optimal): 512 bytes / 512 bytes
>> Disklabel type: gpt
>> Disk identifier: E97B9FFA-2C13-474E-A0E4-ABF1572CD20C
>>
>> Device            Start      End  Sectors  Size Type
>> /dev/nvme0n1p1    2048  1230847  1228800  600M EFI System
>> /dev/nvme0n1p2  1230848 17512447 16281600  7.8G Linux filesystem
>> /dev/nvme0n1p3 17512448 17514495    2048    1M BIOS boot
>
> Looks like you could move p3 to the end of the disk and then enlarge p2
> and then grow the XFS on it.
>
> I'm not sure it's a good idea to use fdisk on a GPT disk. At least in the
> past this wasn't supported and I don't know how much has changed here. I
> didn't touch a lot of GPT systems yet, and where I did I felt frightened
> by the whole EFI stuff :)
>
> Regards,
> Simon
>
>




Re: [CentOS] Bare metal vs. virtualization: Proxmox + Ceph + CentOS ?

2021-03-14 Thread Valeri Galtsev


> On Mar 14, 2021, at 5:42 AM, Leon Fauster via CentOS  
> wrote:
> 
> Am 14.03.21 um 07:13 schrieb Nicolas Kovacs:
>> 
>> Now here’s the problem: it took me three and a half days of intense work to
>> restore everything and get everything running again. Three and a half days of
>> downtime is quite a stretch.
> 
> What was the real problem? Why did you need days to restore
> from backups? Maybe the new solution is attached here?

I would second what Leon said. Even though my backup system is different
(Bareos), my estimate of a full restore to a different machine would be:
installation of a new system (about 30 minutes at most), then restore of
everything from the Bareos backup, which will depend on the total size of
everything to restore; the bottleneck will be the 1 Gbps network
connection. And I do not think my FreeBSD boxes with dozens of jails are
much simpler than Nicolas's front-end machine. A restore from backup is
just a restore from backup.

But under some circumstances it can be even faster. I once had a quite
important machine die (system board). But I had different hardware running
less critical stuff, which accepted the drives from the failed machine plus
its RAID card; after boot, the only thing that needed to be addressed was
the network configuration (due to different device names). (Both boxes have
an 8-port SATA/SAS backplane, and all filesystems on these machines live on
hardware RAID-6.)

As far as distributed file systems are concerned, they are nice (but with
Ceph you will need all boxes to have the same amount of storage). However,
that is more expensive. The cheaper guy (me) goes with hardware RAID and a
spare machine (if necessary, that is: in the manner of grabbing a less
important box's hardware and sticking the drives from the failed machine
into it).

Virtualization: in our shop (we use FreeBSD jails), it provides more
security and flexibility. As far as “disaster recovery” is concerned, using
jails doesn’t affect it in any way. But it often helps to avoid disasters
created by a sudden conflict between packages: only inseparable components
run in the same jail, so the actual server is a bunch of jails, each
running one or two services, which gives extra robustness. If A depends on
C and B depends on D, and C and D conflict with each other, that doesn’t
matter when A lives in one jail and B lives in another.

One example of this flexibility from just the other week: I migrated a box
with a couple dozen jails (most of them independent servers with different
IPs, “virtualized” in the sense that they run as jails on one machine).
Moving the whole thing to another machine at once would have meant a long,
noticeable downtime, but moving jails one at a time made the downtime of
each as short as a mere reboot would cause. (In general, any sort of
virtualization gives you that.)

I hope this helps.

Valeri

> --
> Leon
> 
> 



Re: [CentOS] Bare metal vs. virtualization: Proxmox + Ceph + CentOS ?

2021-03-14 Thread Mauricio Tavares
How many extra servers can you add to your setup? If I were in your
shoes, I would consider building a file server/NAS with a fast
connection to your server(s). Then share the data with the server
(NFS?), export the disk (iSCSI), or some combination of both. I hope
someone can correct me, but I think Postfix has issues with user
accounts on NFS partitions.

The next step is building your web/mail/etc. servers -- be they VMs or
all on the same bare metal -- as thin as possible, so you can rebuild
them quickly (Ansible?), mount the data file shares, and off you go. If
you are going the VM route, you could either save snapshots or build one
of those setups with two servers, so that in case one goes boink the
other takes over. This is also good for upgrading one of the VM servers:
do them on different days so you can see if there are problems.

If you cannot have more than one server, do run VMs and then put them on
a second set of disks, so that if something happens to the boot disk you
can recover.

On Sun, Mar 14, 2021 at 1:13 AM Nicolas Kovacs  wrote:
>
> Hi,
>
> Last week I had a disaster which took me a few unnerving days to repair. My
> main Internet-facing server is a bare-metal installation with CentOS 7. It
> hosts four dozen web sites (or web applications) based on WordPress, Dolibarr,
> OwnCloud, GEPI, and quite a number of mail accounts for ten different domains.
> On sunday afternoon this machine had a hardware failure and proved to be
> unrecoverable.
>
> The good news is, I always have backups of everything. In that case, I have a
> dedicated backup server (in a different datacenter in a different country). 
> I’m
> using Rsnapshot for incremental backups, so I had all data: websites, mail
> accounts, database dumps, configurations, etc.
>
> Now here’s the problem: it took me three and a half days of intense work to
> restore everything and get everything running again. Three and a half days of
> downtime is quite a stretch.
>
> As far as I understand, my mistake was to use a bare-metal installation and 
> not
> a virtualized solution where I could simply restore a snapshot of a VM. 
> Correct
> me if I’m wrong.
>
> Now I’m doing a lot of thinking and searching. Proxmox and Ceph look quite
> promising. From what I can tell, the idea is not to use a big server but a
> cluster of many small servers, and aggregate them like you would do with hard
> disks in a RAID 10 array for example, only you would do this for the whole
> system. And then install one or several CentOS 7 VMs on top of this setup.
>
> Any advice from the pros before I dive head first into the  documentation?
>
> Cheers from the sunny South of France,
>
> Niki
>
> --
> Microlinux - Solutions informatiques durables
> 7, place de l'église - 30730 Montpezat
> Site : https://www.microlinux.fr
> Blog : https://blog.microlinux.fr
> Mail : i...@microlinux.fr
> Tél. : 04 66 63 10 32
> Mob. : 06 51 80 12 12


Re: [CentOS] Expand XFS filesystem on CentOS Linux release 8.2.2004 (Core)

2021-03-14 Thread Strahil Nikolov via CentOS
I'm constantly using fdisk on GPT and everything has been fine.
Best Regards,
Strahil Nikolov

On Fri, Mar 12, 2021 at 15:30, Simon Matter wrote:
> Hi,
>
> Is there a way to expand xfs filesystem /dev/nvme0n1p2 which is 7.8G and
> occupy the remaining free disk space of 60GB?
>
> [root@ip-10-0-0-218 centos]# df -hT --total
> Filesystem    Type      Size  Used Avail Use% Mounted on
> devtmpfs      devtmpfs  1.7G    0  1.7G  0% /dev
> tmpfs          tmpfs    1.7G    0  1.7G  0% /dev/shm
> tmpfs          tmpfs    1.7G  23M  1.7G  2% /run
> tmpfs          tmpfs    1.7G    0  1.7G  0% /sys/fs/cgroup
> /dev/nvme0n1p2 xfs       7.8G  7.0G  824M  90% /
> expand /dev/nvme0n1p2 which is 7.8G and occupy the remaining free disk
> space of 60GB.
> /dev/nvme0n1p1 vfat      599M  6.4M  593M  2% /boot/efi
> tmpfs          tmpfs    345M    0  345M  0% /run/user/1000
> total          -          16G  7.0G  8.5G  46% -
> [root@ip-10-0-0-218 centos]# fdisk -l
> GPT PMBR size mismatch (20971519 != 125829119) will be corrected by write.
> The backup GPT table is not on the end of the device. This problem will be
> corrected by write.

How did you end up in this situation? Did you copy the data from a smaller
disk to this 60G disk?

> Disk /dev/nvme0n1: 60 GiB, 64424509440 bytes, 125829120 sectors
> Units: sectors of 1 * 512 = 512 bytes
> Sector size (logical/physical): 512 bytes / 512 bytes
> I/O size (minimum/optimal): 512 bytes / 512 bytes
> Disklabel type: gpt
> Disk identifier: E97B9FFA-2C13-474E-A0E4-ABF1572CD20C
>
> Device            Start      End  Sectors  Size Type
> /dev/nvme0n1p1    2048  1230847  1228800  600M EFI System
> /dev/nvme0n1p2  1230848 17512447 16281600  7.8G Linux filesystem
> /dev/nvme0n1p3 17512448 17514495    2048    1M BIOS boot

Looks like you could move p3 to the end of the disk and then enlarge p2
and then grow the XFS on it.
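
One possible sequence, strictly as a sketch: it assumes sgdisk (gdisk package) and growpart (cloud-utils-growpart package) are available, and a full backup exists, since a mistake here can destroy the disk. Since p3 is the BIOS-boot partition, grub2-install may also need to be re-run after it moves.

```shell
# Move the backup GPT header to the actual end of the enlarged disk
sgdisk -e /dev/nvme0n1

# Delete the 1 MiB BIOS-boot partition (p3) and recreate it at the
# end of the disk (negative start = offset from the end; ef02 = BIOS boot)
sgdisk -d 3 /dev/nvme0n1
sgdisk -n 3:-2048:0 -t 3:ef02 /dev/nvme0n1

# Grow p2 into the freed space, then grow the mounted XFS filesystem online
growpart /dev/nvme0n1 2
xfs_growfs /
```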

I'm not sure it's a good idea to use fdisk on a GPT disk. At least in the
past this wasn't supported and I don't know how much has changed here. I
didn't touch a lot of GPT systems yet, and where I did I felt frightened
by the whole EFI stuff :)

Regards,
Simon

  


Re: [CentOS] Bare metal vs. virtualization: Proxmox + Ceph + CentOS ?

2021-03-14 Thread Leon Fauster via CentOS

Am 14.03.21 um 07:13 schrieb Nicolas Kovacs:


Now here’s the problem: it took me three and a half days of intense work to
restore everything and get everything running again. Three and a half days of
downtime is quite a stretch.



What was the real problem? Why did you need days to restore
from backups? Maybe the new solution is attached here?

--
Leon




Re: [CentOS] Dual WAN on EL8 desktop.

2021-03-14 Thread Gordon Messmer
On Mon, Feb 15, 2021 at 10:30 PM Thomas Stephen Lee wrote:

> What is ideal is the bandwidth of two connections and half bandwidth
> when one link is down.


That may not be *generally* possible.  You can load-balance your network
streams (connections), so that you'll utilize the bandwidth of two physical
links in sum, but each individual network stream is going to traverse just
one physical link, and will never be faster than whichever link is selected
for that stream when it is initiated.
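
This kind of per-stream balancing can be sketched with an iproute2 multipath default route (the gateway addresses and interface names below are invented for illustration):

```shell
# Replace the default route with a multipath route over both uplinks;
# each new connection is hashed onto one of the two next hops, so a
# single stream never exceeds the speed of the link it landed on.
ip route replace default \
    nexthop via 192.0.2.1 dev eth0 weight 1 \
    nexthop via 198.51.100.1 dev eth1 weight 1
```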