On Tue 16 Jan 2024 at 20:08:12 (-0500), gene heskett wrote:
> On 1/16/24 00:56, Felix Miata wrote:
> > gene heskett composed on 2024-01-15 17:56 (UTC-0500):
> > 
> > > Thanks for that composition: but it will be word wrapped:
> > > root@coyote:~# for j in /dev/disk/by-id/* ; do printf '%s\t%s\n' "$(realpath "$j")" "$j" ; done
[ … ]
> > I straightened out the wrapping mess, and gave each entry a line number.
> > I see nothing I recognize as representing serial number duplication among
> > /dev/sdX (physical device) names:
> > [ … ]
> > /dev/sdd    19  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V
> > /dev/sdd    20  /dev/disk/by-id/wwn-0x5002538f413394ae
> > /dev/sdd1   21  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part1
> > /dev/sdd1   22  /dev/disk/by-id/wwn-0x5002538f413394ae-part1
> > /dev/sdd2   23  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part2
> > /dev/sdd2   24  /dev/disk/by-id/wwn-0x5002538f413394ae-part2
> > /dev/sdd3   25  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part3
> > /dev/sdd3   26  /dev/disk/by-id/wwn-0x5002538f413394ae-part3
> > /dev/sde    27  /dev/disk/by-id/wwn-0x5002538f413394a9
> > /dev/sde    28  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E
> > /dev/sde1   29  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part1
> > /dev/sde1   30  /dev/disk/by-id/wwn-0x5002538f413394a9-part1
> > /dev/sde2   31  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part2
> > /dev/sde2   32  /dev/disk/by-id/wwn-0x5002538f413394a9-part2
> > /dev/sde3   33  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302502E-part3
> > /dev/sde3   34  /dev/disk/by-id/wwn-0x5002538f413394a9-part3
> > /dev/sdf    35  /dev/disk/by-id/wwn-0x5002538f413394a5
> > /dev/sdf    36  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T
> > /dev/sdf1   37  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part1
> > /dev/sdf1   38  /dev/disk/by-id/wwn-0x5002538f413394a5-part1
> > /dev/sdf2   39  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part2
> > /dev/sdf2   40  /dev/disk/by-id/wwn-0x5002538f413394a5-part2
> > /dev/sdf3   41  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302498T-part3
> > /dev/sdf3   42  /dev/disk/by-id/wwn-0x5002538f413394a5-part3
> > /dev/sdg    43  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W
> > /dev/sdg    44  /dev/disk/by-id/wwn-0x5002538f413394b0
> > /dev/sdg1   45  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part1
> > /dev/sdg1   46  /dev/disk/by-id/wwn-0x5002538f413394b0-part1
> > /dev/sdg2   47  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part2
> > /dev/sdg2   48  /dev/disk/by-id/wwn-0x5002538f413394b0-part2
> > /dev/sdg3   49  /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302509W-part3
> > /dev/sdg3   50  /dev/disk/by-id/wwn-0x5002538f413394b0-part3

> lsblk, which I've published several times, shows 5 drives. The by-id
> listing only shows 3. The drive I've been trying to use bounces from
> /dev/sdd to sde to sdh depending on which controller it is currently
> plugged into.

I take it that you're trying to copy to one Gigastone SSD. Presumably
the kernel favours some controllers over others in the race to name
them. This is why using the kernel's device names is no longer
recommended.
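
For example (just a sketch; the mount point below is only an
illustration), the same partition can be addressed by its stable by-id
name from your listing, no matter which controller wins the naming race
at boot:

  # the by-id symlink follows the drive itself, not the controller slot
  # it happens to be plugged into on this boot:
  mount /dev/disk/by-id/ata-Samsung_SSD_870_EVO_1TB_S626NF0R302507V-part1 /mnt/backup

The same stable name works anywhere a /dev/sdX name would, including
/etc/fstab and as an rsync or cp target.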

> And I've since tried cp in addition to rsync; it does the same thing,
> killing the system with the OOM but much quicker. cp uses all system
> memory (32 GB) in 1 minute, another 500K into swap adds another 15
> secs, and the OOM kills the system. So both cp and rsync act broken.

I'd be tempted to bisect the problem by copying to another machine
through a cat5 cable.
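
Something along these lines might do (the hostname and paths here are
only placeholders; -a and --progress are ordinary rsync options):

  # push the data to a different box over the network instead of the
  # local SSD, to see whether the OOM follows the data or the
  # destination disk/controller:
  rsync -a --progress /path/to/source/ otherhost:/path/to/dest/

If the copy to another machine survives, that would point at the
destination drive or its controller rather than at rsync and cp
themselves.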

> rsync, with --bwlimit=3m set, takes much longer to kill the system
> but the amount of data moved is very similar: 13.5G from clean disk to
> system freeze for rsync, 13.4G for cp.

I don't know enough about how rsync behaves to interpret that
coincidence, but it seems ominous on its face.
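
If it would help to see where the memory goes while either copy runs,
one purely diagnostic sketch is to watch the kernel's dirty/writeback
figures alongside available memory (watch and /proc/meminfo are
standard):

  # refresh once a second while the copy runs in another terminal:
  watch -n1 "grep -E 'Dirty|Writeback|MemAvailable' /proc/meminfo"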

Cheers,
David.
