Re: [PLUG] Change hard drive FS

2023-09-20 Thread wes
On Tue, Sep 19, 2023 at 2:41 PM John Jason Jordan  wrote:

> either way the installed Debian appears in the Grub list, but
> it still won't boot. I'd rather have it on the new 2TB drive than on
> the three-year-old 1TB drive, but that probably also doesn't
> matter.
>
>
>
I bet if you remove the 1TB drive and then install to the 2TB drive, it'll
boot.

Getting it to dual boot after you put the Ubuntu drive back could be a
"fun" adventure, but... it's a step.

-wes


Re: [PLUG] Change hard drive FS

2023-09-19 Thread John Jason Jordan
I'm back in Ubuntu, following my ninth failed attempt to install Debian
12. But I should say at the outset that, like Rich, I've used ext4
for many years and have never had a problem. My issue with installing
Debian 12 is that it won't boot because of a failure in setting up Grub.
And the root cause of that is that I am too dumb to properly follow
the instructions in the Debian installer. Xubuntu is on a 1TB Samsung
M.2 drive, and I'm trying to install Debian on a new 2TB Samsung M.2
drive. I can easily tell which is which when they give me the Samsung
product name, but while setting up Grub the installer asks
'Install grub to your primary drive?' Well, which drive is that? Both
drives have a primary partition for / and a second logical partition
for /home.
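
(From a running system something like

  lsblk -o NAME,SIZE,MODEL

shows which Samsung is which by model name, but the installer's 'primary
drive' question doesn't put it in those terms.)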

It probably doesn't matter. I've answered that question 'yes,' and
'no,' and either way the installed Debian appears in the Grub list, but
it still won't boot. I'd rather have it on the new 2TB drive than on
the three-year-old 1TB drive, but that probably also doesn't matter.

And speaking of primary partitions, the Debian installer set the
boot flag for / on the new drive, but the / partition on the old drive
(which unfailingly boots Ubuntu) has no boot flag, at least according to
GParted. More stuff I don't understand.
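
(For anyone checking the same thing without GParted, parted prints the
flags too, e.g.

  sudo parted /dev/nvme0n1 print   # device name here is just an example

which should show the same boot-flag difference.)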


Michael Ewan wrote:

>I am glad you have not had any problems.  I have had the opposite
>experience with ext4 but never a problem with xfs, hence my suggestion.
>
>On Tue, Sep 19, 2023 at 1:25 PM Rich Shepard 
>wrote:
>
>> On Tue, 19 Sep 2023, Michael Ewan wrote:
>>  
>> > You will ultimately have problems with a corrupted file system
>> > with ext4, almost guaranteed. Xfs is a much more robust file
>> > system but if you do not trust it, then try zfs or btrfs.
>>
>> Michael,
>>
>> I've used ext2, ext3, and ext4 with no issues on any of them. I'll
>> stay with what's worked flawlessly for me since 1997.
>>
>> Thanks,
>>
>> Rich
>>
>>  


Re: [PLUG] Change hard drive FS

2023-09-19 Thread Rich Shepard

On Tue, 19 Sep 2023, Ben Koenig wrote:


Back to Rich's original question, though: you don't configure your
filesystem with fdisk. If you already have drives that are actively in
use, then you can leave the partitions alone and just reformat with
mkfs. Slackware also includes several /sbin/mkfs.* programs as front
ends to whatever filesystem you intend to use.


Ben,

I wrote too quickly. I'll use fdisk/cfdisk only to partition the USB
flash drive, then use mkfs.ext4 after copying off the data.
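
Roughly, for the flash drive (sdX is a placeholder until I see what device
name it comes up as):

  cfdisk /dev/sdX        # lay out a single Linux partition
  mkfs.ext4 /dev/sdX1    # then make the filesystem on it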

Thanks for the reminder.

Regards,

Rich


Re: [PLUG] Change hard drive FS

2023-09-19 Thread Ben Koenig
I can't find a link to share at the moment, but I remember reading some
comments from an interview with one of the EXT4 developers where he said that
while there are some issues, EXT4 is extremely robust when it comes to
recovering from data corruption.

Basically he was saying that it tends to be pretty magical at replaying the
journal, but data still gets corrupted. In other words, data gets corrupted
for everybody, but depending on your use case the journal replay feature will
either magically fix the problem or catastrophically fail.
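
If you want to trigger that replay-and-check by hand, running e2fsck on the
unmounted device does it, something like:

  e2fsck -f /dev/sdX1   # example device; replays the journal, then a full check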

Back to Rich's original question, though: you don't configure your filesystem
with fdisk. If you already have drives that are actively in use, then you can
leave the partitions alone and just reformat with mkfs. Slackware also
includes several /sbin/mkfs.* programs as front ends to whatever filesystem
you intend to use.

bash-5.1$ ls -l /sbin/mkfs*
-rwxr-xr-x 1 root root  14664 Feb 15  2022 /sbin/mkfs
-rwxr-xr-x 1 root root  35408 Feb 15  2022 /sbin/mkfs.bfs
-rwxr-xr-x 1 root root 532576 Jan 15  2022 /sbin/mkfs.btrfs
-rwxr-xr-x 1 root root  39480 Feb 15  2022 /sbin/mkfs.cramfs
-rwxr-xr-x 1 root root  35184 Nov 17  2021 /sbin/mkfs.exfat
lrwxrwxrwx 1 root root  6 Feb 18  2022 /sbin/mkfs.ext2 -> mke2fs
lrwxrwxrwx 1 root root  6 Feb 18  2022 /sbin/mkfs.ext3 -> mke2fs
lrwxrwxrwx 1 root root  6 Feb 18  2022 /sbin/mkfs.ext4 -> mke2fs
lrwxrwxrwx 1 root root  6 Feb 18  2022 /sbin/mkfs.ext4dev -> mke2fs
-rwxr-xr-x 1 root root  44144 Feb 13  2021 /sbin/mkfs.f2fs
-rwxr-xr-x 1 root root  56592 Feb 13  2021 /sbin/mkfs.fat
lrwxrwxrwx 1 root root  8 Feb 18  2022 /sbin/mkfs.jfs -> jfs_mkfs
-rwxr-xr-x 1 root root 109664 Feb 15  2022 /sbin/mkfs.minix
lrwxrwxrwx 1 root root  8 Feb 18  2022 /sbin/mkfs.msdos -> mkfs.fat
lrwxrwxrwx 1 root root 12 Jun 24 15:05 /sbin/mkfs.ntfs -> /sbin/mkntfs
lrwxrwxrwx 1 root root 10 Feb 18  2022 /sbin/mkfs.reiserfs -> mkreiserfs
lrwxrwxrwx 1 root root  8 Feb 18  2022 /sbin/mkfs.vfat -> mkfs.fat
-rwxr-xr-x 1 root root 486904 Aug 21  2021 /sbin/mkfs.xfs

See the manpages. You don't need fdisk unless you want to change partition
sizes/settings. Once mkfs has been run and the new drive is mounted, Linux
should handle the translation between filesystems seamlessly.
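
For one of Rich's backup partitions that would look roughly like this (as
root; the mount point is just an example, and mkfs destroys whatever is on
the partition, so copy the data off first):

  mkfs.ext4 -L backup1 /dev/sde1   # new ext4 filesystem where xfs was; -L sets a label
  mount /dev/sde1 /mnt             # mount it and copy the files back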
-Ben

--- Original Message ---
On Tuesday, September 19th, 2023 at 1:35 PM, Michael Ewan 
 wrote:


> I am glad you have not had any problems. I have had the opposite
> experience with ext4 but never a problem with xfs, hence my suggestion.
> 
> On Tue, Sep 19, 2023 at 1:25 PM Rich Shepard rshep...@appl-ecosys.com
> wrote:
> 
> > On Tue, 19 Sep 2023, Michael Ewan wrote:
> > 
> > > You will ultimately have problems with a corrupted file system with ext4,
> > > almost guaranteed. Xfs is a much more robust file system but if you do
> > > not trust it, then try zfs or btrfs.
> > 
> > Michael,
> > 
> > I've used ext2, ext3, and ext4 with no issues on any of them. I'll stay
> > with what's worked flawlessly for me since 1997.
> > 
> > Thanks,
> > 
> > Rich


Re: [PLUG] Change hard drive FS

2023-09-19 Thread Michael Ewan
I am glad you have not had any problems.  I have had the opposite
experience with ext4 but never a problem with xfs, hence my suggestion.

On Tue, Sep 19, 2023 at 1:25 PM Rich Shepard 
wrote:

> On Tue, 19 Sep 2023, Michael Ewan wrote:
>
> > You will ultimately have problems with a corrupted file system with ext4,
> > almost guaranteed.  Xfs is a much more robust file system but if you do
> > not trust it, then try zfs or btrfs.
>
> Michael,
>
> I've used ext2, ext3, and ext4 with no issues on any of them. I'll stay
> with what's worked flawlessly for me since 1997.
>
> Thanks,
>
> Rich
>
>


Re: [PLUG] Change hard drive FS

2023-09-19 Thread Rich Shepard

On Tue, 19 Sep 2023, Michael Ewan wrote:


You will ultimately have problems with a corrupted file system with ext4,
almost guaranteed.  Xfs is a much more robust file system but if you do not
trust it, then try zfs or btrfs.


Michael,

I've used ext2, ext3, and ext4 with no issues on any of them. I'll stay with
what's worked flawlessly for me since 1997.

Thanks,

Rich



Re: [PLUG] Change hard drive FS

2023-09-19 Thread Michael Ewan
You will ultimately have problems with a corrupted file system with ext4,
almost guaranteed.  Xfs is a much more robust file system but if you do not
trust it, then try zfs or btrfs.

On Sun, Sep 17, 2023 at 6:26 AM Rich Shepard 
wrote:

> A while ago, when I had backup issues with the logical volume on the
> external MediaSonic Pro enclosure, I removed the LV and formatted the two
> drives to xfs upon advice here. My dirvish backup is on /dev/sde1 and when
> that's done rsync copies daily changes to /dev/sdf1.
>
> I've since learned that xfs has issues and can confirm that's so: the main
> backup drive, /dev/sde1, keeps reporting errors to the kernel, which advises
> me to run xfs_repair. (The second backup drive, /dev/sdf1, had to be
> repaired one time.)
>
> I bought a 1T flash drive (each backup hard drive has ~500G on it) and
> today's the day to replace xfs with ext4 on both /dev/sde1 and /dev/sdf1.
>
> It should be a simple process and I'm asking for validation (or correction,
> if warranted) for it:
>
> 1. Use fdisk to install ext4 on 1T flash drive.
> 2. Mount flash drive on /mnt.
> 3. Use scp -r to copy all files from /dev/sde1 to /mnt.
> 4. Use cfdisk to remove xfs from /dev/sde1 and replace it with ext4.
> 5. Use scp -r to copy files from /mnt to /dev/sde1.
>
> Then do the same for /dev/sdf1.
>
> Your thoughts?
>
> TIA,
>
> Rich
>
>


[PLUG] Change hard drive FS

2023-09-17 Thread Rich Shepard

A while ago, when I had backup issues with the logical volume on the
external MediaSonic Pro enclosure, I removed the LV and formatted the two
drives to xfs upon advice here. My dirvish backup is on /dev/sde1 and when
that's done rsync copies daily changes to /dev/sdf1.

I've since learned that xfs has issues and can confirm that's so: the main
backup drive, /dev/sde1, keeps reporting errors to the kernel, which advises
me to run xfs_repair. (The second backup drive, /dev/sdf1, had to be
repaired one time.)
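
(For the record, the repair has to run on the unmounted device, along the
lines of:

  umount /dev/sde1
  xfs_repair /dev/sde1

per the xfs_repair manpage.)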

I bought a 1T flash drive (each backup hard drive has ~500G on it) and
today's the day to replace xfs with ext4 on both /dev/sde1 and /dev/sdf1.

It should be a simple process and I'm asking for validation (or correction,
if warranted) for it:

1. Use fdisk to install ext4 on 1T flash drive.
2. Mount flash drive on /mnt.
3. Use scp -r to copy all files from /dev/sde1 to /mnt.
4. Use cfdisk to remove xfs from /dev/sde1 and replace it with ext4.
5. Use scp -r to copy files from /mnt to /dev/sde1.

Then do the same for /dev/sdf1.
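
In command terms I'm picturing roughly the following per drive, though I'm
not sure fdisk is even the right tool for steps 1 and 4, hence the question.
The flash drive's device name is a guess and the mount point for the backup
drive is made up; I'd use cp -a rather than scp for the local copies:

  mkfs.ext4 /dev/sdg1           # 1: format the 1T flash drive
  mount /dev/sdg1 /mnt          # 2
  cp -a /backup/sde/. /mnt/     # 3: /backup/sde = wherever /dev/sde1 is mounted
  umount /dev/sde1
  mkfs.ext4 /dev/sde1           # 4: replace xfs with ext4
  mount /dev/sde1 /backup/sde
  cp -a /mnt/. /backup/sde/     # 5: copy everything back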

Your thoughts?

TIA,

Rich