Re: md0 + UUIDs for member disks

2023-12-31 Thread Felix Natter
hello Andy, thanks for the detailed answer. Andy Smith writes: > Hello, > > On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote: >> Andy Smith writes: >> > But to be absolutely sure you may wish to totally ignore md0 and >> > its member devices during install as all their data and the

Re: md0 + UUIDs for member disks

2023-12-29 Thread Andy Smith
Hello, On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote: > Andy Smith writes: > > But to be absolutely sure you may wish to totally ignore md0 and > > its member devices during install as all their data and the > > metadata on their member devices will still be there after > >
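Worth noting for this thread: mdadm does not depend on /dev/sdX names at all; each member's superblock carries the array's own UUID, and assembly matches on that. A minimal sketch of where that UUID shows up — the ARRAY line below is an invented example of `mdadm --detail --scan` output, not taken from Felix's system:

```shell
# Canned example of an `mdadm --detail --scan` line (all values invented);
# on a real system the command prints one such line per assembled array.
scan='ARRAY /dev/md0 metadata=1.2 name=host:0 UUID=f9a816d1:6b0c6f64:29c88b2d:a3a29b9d'

# Strip everything up to and including "UUID=" to isolate the array UUID.
uuid=${scan##*UUID=}
echo "$uuid"
```

This is the UUID mdadm stores in every member's metadata, which is why renumbered sdX names do not affect assembly.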

Re: md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
hi Steve, thanks for the quick reply! Steve McIntyre writes: > fnat...@gmx.net wrote: >> >>I have /dev/md0 mounted at /storage which consists of two HDDs. >> >>Now I would like to add an SSD drive for better performance of >>VMs. Usually, before doing this,

Re: md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
mance of >> VMs. Usually, before doing this, I make sure that all of my disks are >> mounted using UUID and not device names. I do not think this is >> the case for the two member HDDs of md0 (cat /proc/mdstat). >> Is there an easy way to fix this? > > What are you t

Re: md0 + UUIDs for member disks

2023-12-29 Thread Steve McIntyre
fnat...@gmx.net wrote: > >I have /dev/md0 mounted at /storage which consists of two HDDs. > >Now I would like to add an SSD drive for better performance of >VMs. Usually, before doing this, I make sure that all of my disks are >mounted using UUID and not device nam

Re: md0 + UUIDs for member disks

2023-12-29 Thread Andy Smith
Hi Felix, On Fri, Dec 29, 2023 at 04:46:10PM +0100, Felix Natter wrote: > I have /dev/md0 mounted at /storage which consists of two HDDs. > > Now I would like to add an SSD drive for better performance of > VMs. Usually, before doing this, I make sure that all of my disks are >

md0 + UUIDs for member disks

2023-12-29 Thread Felix Natter
hello Debian experts, I have /dev/md0 mounted at /storage which consists of two HDDs. Now I would like to add an SSD drive for better performance of VMs. Usually, before doing this, I make sure that all of my disks are mounted using UUID and not device names. I do not think this is the case
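For the mount-by-UUID part of the question, only the filesystem on /dev/md0 itself needs an fstab entry. A hedged sketch of composing one — the UUID, filesystem type, and options are invented examples; on a real system the UUID would come from `blkid -s UUID -o value /dev/md0`:

```shell
# Compose an fstab entry that mounts by filesystem UUID rather than by a
# device name. UUID, filesystem type and options below are examples only.
fstab_line() {
    printf 'UUID=%s %s ext4 defaults 0 2\n' "$1" "$2"
}

fstab_line 3f50ca38-20f3-4a12-880c-a1283ac6e41b /storage
```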

Re: Duplicating 'boot sector' for GPT disks in RAID1

2022-10-07 Thread Andrew Wood
On 07/10/2022 15:39, Andy Smith wrote: Yeah, it's a mess with EFI. I wrote this in November 2020: "Redundancy for EFI System Partition: what do people do in 2020?" https://lists.debian.org/debian-user/2020/11/msg00455.html (worth reading replies for more authoritative

Re: Duplicating 'boot sector' for GPT disks in RAID1

2022-10-07 Thread Andy Smith
Hi, On Fri, Oct 07, 2022 at 02:50:06PM +0100, Andrew Wood wrote: > How can I ensure that the system is capable of booting from either remaining > disk in the event of a failure? Yeah, it's a mess with EFI. I wrote this in November 2020: "Redundancy for EFI System Partition: what do people

Duplicating 'boot sector' for GPT disks in RAID1

2022-10-07 Thread Andrew Wood
In the past when Ive had 2 disks in a RAID1 mirror Ive been able replicated the MBR on both such that the system is capable of booting from either in the event of a failure using the command dpkg-reconfigure grub-pc which then allowed both disks to be selected. I was under the impression
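For the GPT/EFI case discussed in this thread, one commonly used approach is a second ESP on the other disk, mirrored from the first, with GRUB additionally installed to the removable-media fallback path so firmware finds it without an NVRAM entry. The block below only prints the commands rather than running them; the mount points /boot/efi and /boot/efi2 are assumptions for illustration:

```shell
# Print (not run) a sketch of keeping a second ESP bootable. The mount
# points are assumed; adapt to the actual layout before running anything.
cmds=$(cat <<'EOF'
grub-install --target=x86_64-efi --efi-directory=/boot/efi
grub-install --target=x86_64-efi --efi-directory=/boot/efi2 --removable --no-nvram
rsync -a --delete /boot/efi/ /boot/efi2/
EOF
)
printf '%s\n' "$cmds"
```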

Re: Disks renamed after update to 'testing'...?

2020-08-21 Thread David Christensen
On 2020-08-20 14:58, Andy Smith wrote: ... dm-integrity can now be used with LUKS (with or without encryption) to add checksums that force a read error when they don't match. When there is redundancy (e.g. LVM or MD) a read can then come from a good copy and the bad copy will be repaired. So,

Re: Disks renamed after update to 'testing'...?

2020-08-21 Thread tomas
On Thu, Aug 20, 2020 at 01:34:58PM -0700, David Christensen wrote: > On 2020-08-20 08:32, rhkra...@gmail.com wrote: > >On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote: > >>Contrary to the other (very valid) points, my backups are always on > >>a LUKS drive, no partition table.

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Andy Smith
Hello, On Thu, Aug 20, 2020 at 05:30:20PM -0400, Dan Ritter wrote: > David Christensen wrote: > > Some people have mentioned md RAID. tomas has mentioned LUKS. I believe > > both of them add checksums to the contained contents. So, bit-rot within a > > container should be caught by the

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Dan Ritter
David Christensen wrote: > On 2020-08-20 08:32, rhkra...@gmail.com wrote: > I have been pondering bit-rot mitigation on non-checksumming filesystems. > > > Some people have mentioned md RAID. tomas has mentioned LUKS. I believe > both of them add checksums to the contained contents. So,

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread David Christensen
On 2020-08-20 08:32, rhkra...@gmail.com wrote: On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote: Contrary to the other (very valid) points, my backups are always on a LUKS drive, no partition table. Rationale is, should I lose it, the less visible information the better. Best if

Re: Update: Do other owners of WD Gold disks hear a periodic plonk ?

2020-08-20 Thread David Christensen
On 2020-08-20 04:39, Thomas Schmitt wrote: Hi, a while ago i asked about the periodic knocking noise after each 4 seconds of my freshly bought 4 TB HDD WD4003FRYZ. We came to the conclusion that this was a reason to return the disk to the seller. "Click of Death" et.al. I did and got a

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread rhkramer
On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote: > Contrary to the other (very valid) points, my backups are always on > a LUKS drive, no partition table. Rationale is, should I lose it, the > less visible information the better. Best if it looks like a broken > USB stick. No

Re: Update: Do other owners of WD Gold disks hear a periodic plonk ?

2020-08-20 Thread Dan Ritter
Thomas Schmitt wrote: > --- > > So my next question to recent HDD buyers: > > What 4 TB HDD should i want for best reliability and no noises when idle ? > Any recent experiences ? Toshiba and Hitachi (owned by WD but not

Update: Do other owners of WD Gold disks hear a periodic plonk ?

2020-08-20 Thread Thomas Schmitt
Hi, a while ago i asked about the periodic knocking noise after each 4 seconds of my freshly bought 4 TB HDD WD4003FRYZ. We came to the conclusion that this was a reason to return the disk to the seller. "Click of Death" et.al. I did and got a replacement disk. That one knocks less loudly, more

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread Joe
On Thu, 20 Aug 2020 09:43:55 +0200 wrote: > On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote: > > On 2020-08-19 03:03, Urs Thuermann wrote: > > >David Christensen writes: > > > > >>When using a drive as backup media, are there likely use-cases > > >>that benefit from

Re: Disks renamed after update to 'testing'...?

2020-08-20 Thread tomas
On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote: > On 2020-08-19 03:03, Urs Thuermann wrote: > >David Christensen writes: > > >>When using a drive as backup media, are there likely use-cases that > >>benefit from configuring the drive with no partition, a single PV, > >>single

Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread David Christensen
On 2020-08-19 03:03, Urs Thuermann wrote: David Christensen writes: When using a drive as backup media, are there likely use-cases that benefit from configuring the drive with no partition, a single PV, single VG, single LV, and single filesystem vs. configuring the drive with a single

Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread Urs Thuermann
David Christensen writes: > Thanks for the explanation. It seems that pvcreate(8) places an LVM > disk label and an LVM metadata area onto disks or partitions when > creating a PV; including a unique UUID: > > https://www.man7.org/linux/man-pages/man8/pvcreate.8.html Yes, co

Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread David Christensen
disks or partitions when creating a PV; including a unique UUID: https://www.man7.org/linux/man-pages/man8/pvcreate.8.html When using a drive as backup media, are there likely use-cases that benefit from configuring the drive with no partition, a single PV, single VG, single LV, and single

Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread Urs Thuermann
Urs Thuermann writes: > IMO the best solution is to use LVM. I use it since 2001 on most > drives and I don't have partitions. And I prefer to use device names > over using the *UUID or *LABEL prefixes. With LVM, device names are > predictable /dev/mapper/<VG>-<LV> with symlinks > /dev/<VG>/<LV>. Following
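Urs's "predictable names" point has one wrinkle worth knowing: device-mapper joins the VG and LV names with a single `-` and doubles any hyphen that occurs inside either name. A small sketch of the rule (the VG/LV names are invented):

```shell
# Device-mapper naming rule: hyphens inside a VG or LV name are doubled,
# then the two names are joined with a single '-'.
dm_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '/dev/mapper/%s-%s\n' "$vg" "$lv"
}

dm_name backup-vg home    # /dev/mapper/backup--vg-home
```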

Re: Disks renamed after update to 'testing'...?

2020-08-19 Thread Urs Thuermann
David Christensen writes: > AIUI the OP was mounting an (external?) drive partition for use as a > destination for backups. Prior to upgrading to Testing, the root > partition was /dev/sda1 (no LVM?) and the backup partition was > /dev/sdb1 (no LVM?). After upgrading to Testing, the root

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread David Christensen
On 2020-08-18 11:27, Urs Thuermann wrote: "Rick Thomas" writes: The /dev/sdx names for devices have been unpredictable for quite a while. Which one is sda and which sdb will depend on things like timing -- which one gets recognized by the kernel first. The best solution is to either use

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Urs Thuermann
"Rick Thomas" writes: > The /dev/sdx names for devices have been unpredictable for quite a > while. Which one is sda and which sdb will depend on things like > timing -- which one gets recognized by the kernel first. > > The best solution is to either use UUID or LABEL when you fsck > and/or

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Thomas Schmitt
Hi, i wrote: > > I only deem *UUID as safe, Nicolas George wrote: > UUID can get duplicated too. Just have somebody copy the whole block > device with "good ol' dd". Yes, sure. A HDD of mine got by the Debian installation 128 GPT slots of 128 bytes. So the primary GPT including "protective MBR"

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread David Christensen
ay be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # # # / was on /dev/sda1 during installation UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b / ext4 errors=remount-ro 0 1 # swap was on /dev/sda5 during in

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Nicolas George
Thomas Schmitt (12020-08-18): > I only deem *UUID as safe, unless the same names on different devices > are intented and always only one of those devices will be connected. UUID can get duplicated too. Just have somebody copy the whole block device with "good ol' dd". Regards, -- Nicolas

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Thomas Schmitt
Hi, didier gaumet wrote: > give a name to the underlyning [GPT] partition Let me add the hint that a GPT partition "name" is a user defined string (in fstab and lsblk: PARTLABEL=) whereas the partition UUIDs in GPT get handed out by partition editors automatically as random data (human readable
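To keep Thomas's distinction straight: fstab(5) accepts four identifier keys, two per layer (filesystem vs. GPT partition). A reference sketch, printed with invented example values:

```shell
# The four identifier keys fstab(5) understands; all values are invented.
keys=$(cat <<'EOF'
LABEL=backup       (filesystem label, set with e.g. e2label)
UUID=3f50ca38-...  (filesystem UUID, shown by blkid)
PARTLABEL=backup   (GPT partition name, set in a partition editor)
PARTUUID=0a1b2c3d-...  (GPT partition UUID, assigned automatically)
EOF
)
printf '%s\n' "$keys"
```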

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread Rick Thomas
On Mon, Aug 17, 2020, at 4:42 PM, hobie of RMN wrote: > Hi, All - > > My brother has been issuing "mount /dev/sdb1" prior to backing up some > files to a second hard disk. He lately upgraded to 'testing', and it > appears (from result of running df) that what the system now calls > /dev/sdb1 is

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread didier gaumet
Hello, Apparently, it is also possible to either: - give a name to the filesystem (use e2label to do so, the filesystem being ext4) and mount the filesystem by using this name as a parameter of the mount command instead of /dev/sd* or an UUID - give a name to the underlyning partition (use

Re: Disks renamed after update to 'testing'...?

2020-08-18 Thread john doe
works even if disks are added and removed. See fstab(5). #' -- John Doe

Re: Disks renamed after update to 'testing'...?

2020-08-17 Thread hobie of RMN
tput: root@shelby:~# cat /etc/fstab # /etc/fstab: static file system information. # # Use 'blkid' to print the universally unique identifier for a # device; this may be used with UUID= as a more robust way to name devices # that works even if disks are added and removed. See fstab(5). # #

Re: Disks renamed after update to 'testing'...?

2020-08-17 Thread David Christensen
On 2020-08-17 16:42, hobie of RMN wrote: Hi, All - My brother has been issuing "mount /dev/sdb1" prior to backing up some files to a second hard disk. He lately upgraded to 'testing', and it appears (from result of running df) that what the system now calls /dev/sdb1 is what he has thought of

Disks renamed after update to 'testing'...?

2020-08-17 Thread hobie of RMN
Hi, All - My brother has been issuing "mount /dev/sdb1" prior to backing up some files to a second hard disk. He lately upgraded to 'testing', and it appears (from result of running df) that what the system now calls /dev/sdb1 is what he has thought of as /dev/sda1, the system '/' partition.

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread David Christensen
On 2020-07-25 07:25, Thomas Schmitt wrote: See below for the full output of smartctl -a. I saved one drive that had the click of death (and died shortly thereafter). My notes indicated it passed the manufacturer diagnostic tests, but failed the manufacturer full erase procedure. The last

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread David Christensen
On 2020-07-25 02:16, Thomas Schmitt wrote: David Christensen wrote: Click of death. At least this is the reality which i will present to the disk vendor while negotiating about replacement. But personally i still have doubts that it is this particular problem. The knocking is not

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Thomas Schmitt
Hi, i quoted smartctl: > > 4 Start_Stop_Count0x0012 100 100 000Old_age Always - > > 18 > > 9 Power_On_Hours 0x0012 100 100 000Old_age Always - > > 19 > > 12 Power_Cycle_Count 0x0032 100 100 000Old_age Always - > >

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Reco
On Sat, Jul 25, 2020 at 04:25:16PM +0200, Thomas Schmitt wrote: > 4 Start_Stop_Count0x0012 100 100 000Old_age Always > - 18 > 9 Power_On_Hours 0x0012 100 100 000Old_age Always > - 19 > 12 Power_Cycle_Count 0x0032 100
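The attribute table quoted here flattens badly in the archive; for anyone scripting around smartctl, the raw value is simply the last whitespace-separated field of each attribute line. A sketch against a canned line taken from the quote above:

```shell
# A SMART attribute line in smartctl's table format (from the quote above).
attr='  4 Start_Stop_Count        0x0012   100   100   000    Old_age   Always       -       18'

# The raw value is the final field of the line.
raw=$(printf '%s\n' "$attr" | awk '{print $NF}')
echo "$raw"    # 18
```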

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Stefan Monnier
> Stefan Monnier wrote: >> I guess it's still spin-down ^ like Sorry, my fingers didn't obey my brain, Stefan

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Thomas Schmitt
Hi, Reco wrote: > It's all other lines that are interesting here. > I.e. 'Vendor Specific SMART Attributes with Thresholds' section. I wanted to wait with the long list until the self-test is done. See below for the full output of smartctl -a. Stefan Monnier wrote: > I guess it's still

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Stefan Monnier
> It is its way of saying it's an unsupported feature and you cannot > disable drive heads parking this way. Was worth the shot. I guess it's still spin-down: WD drives support it but just ignore the APM settings of "how long to wait before spin-down" and use their own algorithm instead.

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Reco
On Sat, Jul 25, 2020 at 11:16:16AM +0200, Thomas Schmitt wrote: > Reco wrote: > > It's simple: > > smartctl -t long /dev/sda > > The short test yielded > Num Test_DescriptionStatus Remaining LifeTime(hours) > LBA_of_first_error > # 1 Short offline Completed

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Thomas Schmitt
Hi, rhkra...@gmail.com wrote: > You might not have read the entire [Click_of_death] article. Indeed. Now i need to search my old loudspeakers in order to compare the sound file on Wikipedia with the disk's sound. D. R. Evans wrote: > I can say that my experience (YMMV) is that 100% of the

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-25 Thread Reco
Hi. On Fri, Jul 24, 2020 at 11:35:34PM +0200, Thomas Schmitt wrote: > Reco wrote: > > Have you tried to disable drive heads parking via hdparm? > > hdparm -J ? > The man page says "The factory default is eight (8) seconds". > That would be about twice as long as what i experience. > >

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread David Christensen
On 2020-07-24 14:35, Thomas Schmitt wrote: Hi, Hello. :-) David Christensen wrote: https://en.wikipedia.org/wiki/Click_of_death But that's a different technology (and 20 years ago). I have a few Zip drives on the shelf, but only rarely used them back when. My recent "Click of death"

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread Thomas Amm
On Fri, 2020-07-24 at 12:06 -0700, David Christensen wrote: > On 2020-07-24 11:49, Thomas Schmitt wrote: > > Hi, > > > > i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) > > and > > observe a strange behavior with a provisory Debian 10 LXDE > > installation. > > > > If the

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread D. R. Evans
ght not have read the entire article. > Having experienced this phenomenon multiple times on both Zip disks (!) and hard drives, I can say that my experience (YMMV) is that 100% of the drives that exhibit this phenomenon have failed sometime not long after the phenomenon began -- I recently had a hard

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread elvis
On 25/7/20 4:49 am, Thomas Schmitt wrote: Hi, i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and observe a strange behavior with a provisory Debian 10 LXDE installation. If the drive has power then it makes a plonking noise every 3 to 5 seconds. The plonk is louder when

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread rhkramer
On Friday, July 24, 2020 05:35:34 PM Thomas Schmitt wrote: > David Christensen wrote: > > https://en.wikipedia.org/wiki/Click_of_death > > But that's a different technology (and 20 years ago). You might not have read the entire article.

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread Thomas Schmitt
Hi, David Christensen wrote: > https://en.wikipedia.org/wiki/Click_of_death But that's a different technology (and 20 years ago). > If you cannot return the drive, I would download, install, and run > "Data Lifeguard Diagnostic for Windows": >

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread David Christensen
On 2020-07-24 12:14, Reco wrote: Hi. On Fri, Jul 24, 2020 at 08:49:42PM +0200, Thomas Schmitt wrote: Hi, i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and observe a strange behavior with a provisory Debian 10 LXDE installation. If the drive has power then it makes

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread Reco
Hi. On Fri, Jul 24, 2020 at 08:49:42PM +0200, Thomas Schmitt wrote: > Hi, > > i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and > observe a strange behavior with a provisory Debian 10 LXDE installation. > > If the drive has power then it makes a plonking noise every

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread Dan Ritter
Thomas Schmitt wrote: > If the drive has power then it makes a plonking noise every 3 to 5 > seconds. The plonk is louder when Debian runs, but can also be > heard (and felt by direct finger contact with the disk) if only EFI > is running. The sound is not really loud but well hearable when the >

Re: Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread David Christensen
On 2020-07-24 11:49, Thomas Schmitt wrote: Hi, i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and observe a strange behavior with a provisory Debian 10 LXDE installation. If the drive has power then it makes a plonking noise every 3 to 5 seconds. The plonk is louder when

Do other owners of WD Gold disks hear a periodic plonk ?

2020-07-24 Thread Thomas Schmitt
Hi, i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and observe a strange behavior with a provisory Debian 10 LXDE installation. If the drive has power then it makes a plonking noise every 3 to 5 seconds. The plonk is louder when Debian runs, but can also be heard (and felt by

Re: Hard disks auto-spinning-down

2019-10-29 Thread Alex Mestiashvili
writing the email above as far as I remember. I just didn't know how long would it take until it is approved :) >> > Thanks for all this help, guys. Does anyone have any thoughts on why one > generation of an external disk cage wouldn't require this and just spun > down the disks auto

Re: Hard disks auto-spinning-down

2019-10-28 Thread Mark Fletcher
> See man hd-idle for the details. > > One could also write to debian-backports, CC: the maintainer and ask > nicely for a backport ;) > Thanks for all this help, guys. Does anyone have any thoughts on why one generation of an external disk cage wouldn't require this and just spun

Re: Hard disks auto-spinning-down

2019-10-28 Thread Andrei POPESCU
On Ma, 01 oct 19, 15:49:57, Alex Mestiashvili wrote: > > You may want to try hd-idle, it is not yet available in stable, but one > can install it from testing (it is not advisable in general, but the > divergence between buster and testing is not that big right now) > wget it from >

Re: Hard disks auto-spinning-down

2019-10-01 Thread Alex Mestiashvili
On 9/29/19 1:30 PM, Mark Fletcher wrote: > Since a fresh install of buster, an external USB3 hard disk cage from > Terramaster that I own is not automatically spinning down the disks in > it when they go unused for a time. > > I used a previous generation of the cage with Str

Re: Hard disks auto-spinning-down

2019-10-01 Thread Stefan Monnier
> logger "Setting spindown on disk drive: $DISK_DEV" > sdparm --flexible -6 --set SCT=4000 $DISK_DEV > sdparm --flexible -6 --set STANDBY=1 $DISK_DEV I found sdparm inscrutable so far so I'm really curious how you came up with the above incantation (`sdparm -al /dev/sdb`

Re: Hard disks auto-spinning-down

2019-09-30 Thread Mark Fletcher
On Mon, Sep 30, 2019 at 01:41:37AM -0700, B wrote: > > > On 9/29/19 4:30 AM, Mark Fletcher wrote: > > Any thoughts on where I might look to find settings that can be tweaked > > to make it spin down when idle? > > > See sdparm and hdparm tools. hdparm is probably the wrong tool because it's >

Re: Hard disks auto-spinning-down

2019-09-30 Thread B
On 9/29/19 4:30 AM, Mark Fletcher wrote: Any thoughts on where I might look to find settings that can be tweaked to make it spin down when idle? See sdparm and hdparm tools. hdparm is probably the wrong tool because it's for internal drives connected to IDE/ATA/SATA busses. The reason
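For drives that do honour standby settings, `hdparm -S` encodes its timeout oddly; per hdparm(8), values 1-240 are 5-second units and 241-251 are 30-minute units. A sketch of the encoding:

```shell
# hdparm -S timeout encoding (sketch): up to 20 minutes the value is in
# 5-second units; above that, 240 plus the number of 30-minute units.
hdparm_s_value() {
    secs=$1
    if [ "$secs" -le 1200 ]; then
        echo $(( secs / 5 ))
    else
        echo $(( 240 + secs / 1800 ))
    fi
}

hdparm_s_value 600    # 120, i.e. `hdparm -S 120` = spin down after 10 idle minutes
```

Whether the drive obeys the resulting setting is, as this thread shows, another matter entirely.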

Re: Rescuing hard disks

2019-09-29 Thread Mark Fletcher
On Fri, Sep 27, 2019 at 01:41:58PM +0100, Jonathan Dowland wrote: > On Mon, Sep 23, 2019 at 03:11:07PM +0500, Alexander V. Makartsev wrote: > > If I understood this right, you have two disks with data and they were > > previously configured as RAID1 volume. > > What make\mod

Hard disks auto-spinning-down

2019-09-29 Thread Mark Fletcher
Since a fresh install of buster, an external USB3 hard disk cage from Terramaster that I own is not automatically spinning down the disks in it when they go unused for a time. I used a previous generation of the cage with Stretch previously, it spun down the disks when they were not in use

Re: Rescuing hard disks

2019-09-27 Thread Jonathan Dowland
On Mon, Sep 23, 2019 at 03:11:07PM +0500, Alexander V. Makartsev wrote: If I understood this right, you have two disks with data and they were previously configured as RAID1 volume. What make\model RAID-controller do you use? Because "cages" by themselves offer only SATA\SAS ports

Re: Rescuing hard disks

2019-09-24 Thread David Christensen
On 9/24/19 7:00 AM, David wrote: On Tue, 24 Sep 2019 at 07:50, Mark Fletcher wrote: On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote: 2. create the parttion exactly as it was. system. I recognise that was only that easy to do because I knew that the original partition

Re: Rescuing hard disks

2019-09-24 Thread David
On Tue, 24 Sep 2019 at 07:50, Mark Fletcher wrote: > On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote: > > 2. create the parttion exactly as it was. > system. I recognise that was only that easy to do because I knew that > the original partition arrangement was so simple. If it had

Re: Rescuing hard disks

2019-09-23 Thread Mark Fletcher
On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote: > > Posible Options: > 1. if you use lilo, look for a copy of parttions table. > 2. create the parttion exactly as it was. I'm running GRUB not lilo -- used lilo back in the 90's but switched to grub whenever Debian started

Re: Rescuing hard disks

2019-09-23 Thread Alexander V. Makartsev
On 23.09.2019 3:40, Mark Fletcher wrote: > Hello > > While setting up a newly purchased RAID-capable hard disk cage I've > damaged the contents of 2 hard disks and want to know if it is possible > to recover. > > The cage has 5 disk slots each occupied by 3TB hard disks. 4

Re: Rescuing hard disks

2019-09-22 Thread tv.deb...@googlemail.com
On 23/09/2019 08:37, Debian Buster wrote: On Sun, 22 Sep 2019 23:40:51 +0100, Mark Fletcher wrote: Hello While setting up a newly purchased RAID-capable hard disk cage I've damaged the contents of 2 hard disks and want to know if it is possible to recover. The cage has 5 disk slots each

Re: Rescuing hard disks

2019-09-22 Thread Debian Buster
On Sun, 22 Sep 2019 23:40:51 +0100, Mark Fletcher wrote: > Hello > > While setting up a newly purchased RAID-capable hard disk cage I've > damaged the contents of 2 hard disks and want to know if it is possible > to recover. > > The cage has 5 disk slots each occupied

Rescuing hard disks

2019-09-22 Thread Mark Fletcher
Hello While setting up a newly purchased RAID-capable hard disk cage I've damaged the contents of 2 hard disks and want to know if it is possible to recover. The cage has 5 disk slots each occupied by 3TB hard disks. 4 of the disks came from an older cage by the same maker (TerraMaster

Re: My external disks no longer mount

2019-06-28 Thread Jean-Philippe MENGUAL
Hi, Thanks for your reply. After reboot, the problem is fixed. Frankly it is strange as I had already rebooted, I just have one kernel release, but well... it works. Thanks for your help Regards Jean-Philippe MENGUAL On 28/06/2019 at 11:21, Reco wrote: > Hi. > > On Fri, Jun 28,

Re: My external disks no longer mount

2019-06-28 Thread Reco
Hi. On Fri, Jun 28, 2019 at 11:09:37AM +0200, Jean-Philippe MENGUAL wrote: > > Here is what I get now when I plug in a USB stick or disk: > https://paste.debian.net/1089600/ > > I think it is a bug, as I use Sid. Where should I report? Kernel? udev? > systemd? A relevant part of

My external disks no longer mount

2019-06-28 Thread Jean-Philippe MENGUAL
Hi, Here is what I get now when I plug in a USB stick or disk: https://paste.debian.net/1089600/ I think it is a bug, as I use Sid. Where should I report? Kernel? udev? systemd? Best regards, -- Jean-Philippe MENGUAL

Re: New Hardware + old disks not recognized

2019-06-26 Thread Pascal Hambourg
On 26/06/2019 at 20:15, Ross Boylan wrote: So do you think the chroot-generated initrd would have been OK if I'd mounted proc? Yes.

Re: New Hardware + old disks not recognized

2019-06-26 Thread Ross Boylan
On Tue, Jun 25, 2019 at 12:31 PM Pascal Hambourg wrote: > > On 24/06/2019 at 01:40, Ross Boylan wrote: > > > > # update-initramfs -u -k 4.19.0-5-amd64 > > update-initramfs: Generating /boot/initrd.img-4.19.0-5-amd64 > > /usr/share/initramfs-tools/hooks/cryptroot: 64: > >

Re: New Hardware + old disks not recognized

2019-06-25 Thread Pascal Hambourg
On 24/06/2019 at 01:40, Ross Boylan wrote: # update-initramfs -u -k 4.19.0-5-amd64 update-initramfs: Generating /boot/initrd.img-4.19.0-5-amd64 /usr/share/initramfs-tools/hooks/cryptroot: 64: /usr/share/initramfs-tools/hooks/cryptroot: cannot open /proc/mounts: No such file cryptsetup:
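The "cannot open /proc/mounts" failure quoted here is the classic symptom of running update-initramfs inside a chroot that lacks the pseudo-filesystems. A sketch that prints (rather than runs) the usual preparation; /mnt/target is an assumed mount point for the old root:

```shell
# Print the bind mounts a chroot typically needs before update-initramfs
# hooks such as cryptroot can read /proc/mounts. /mnt/target is assumed.
out=$(
    for fs in proc sys dev dev/pts; do
        echo "mount --bind /$fs /mnt/target/$fs"
    done
    echo "chroot /mnt/target update-initramfs -u -k all"
)
printf '%s\n' "$out"
```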

Re: New Hardware + old disks not recognized [SOLVED + lessons learned]

2019-06-24 Thread David Wright
containing MODULES=dep) if you select "targeted". > 4. Manually constructing or modifying an initrd is pretty delicate business. > 5. systemd seems to react very badly to a mount failure in fstab. One strategic decision I've made for years is to always have two root partitions o

Re: New Hardware + old disks not recognized [SOLVED + lessons learned]

2019-06-24 Thread songbird
Ross Boylan wrote: ... > LESSONS > 1. Don't ever get in this situation. It's a mess. > 2. If you're planning on moving to new hardware, ensure your initrd is > generated with MODULES=most BEFORE the move. > 3. Just because initramfs.conf has MODULES=most doesn't mean that's > what you get. In my

Re: New Hardware + old disks not recognized [SOLVED + lessons learned]

2019-06-24 Thread Ross Boylan
ORIGINAL PROBLEM Move disks into a new system (hardware) and found I couldn't boot into the old system (OS on the disk). The initrd couldn't even see the drives. CAUSE New hardware required drivers missing from the initrd, which had been created with MODULES=dep. Also, fstab referenced a drive

Re: New Hardware + old disks not recognized

2019-06-23 Thread Ross Boylan
]: Samsung Electronics Co Ltd NVMe SSD Controller SM961/PM961 [144d:a804] First try: chroot into the old system and update-initramfs. The result can see the disks (yay) but has no, or at least insufficient, crypto. Second try: rsync from the old initrd to the new one.* Considered using -u

Re: New Hardware + old disks not recognized

2019-06-23 Thread songbird
Ross Boylan wrote: > In brief: moved all the 3.5" disks from an old system to a new one, > and now I can't boot into buster. In the initrd environment no disks > appear in /dev; the disks are all connected through an LSI Host Bus > Adapter card (only on the new system). I can

Re: New Hardware + old disks not recognized

2019-06-23 Thread Pascal Hambourg
On 23/06/2019 at 04:06, Ross Boylan wrote: My leading suspect for why the hard disks aren't recognized is that they are all attached through 02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02) whereas before they were

New Hardware + old disks not recognized

2019-06-22 Thread Ross Boylan
In brief: moved all the 3.5" disks from an old system to a new one, and now I can't boot into buster. In the initrd environment no disks appear in /dev; the disks are all connected through an LSI Host Bus Adapter card (only on the new system). I can boot into Ubuntu on the new system, and

Re: Disk setup to boot both RAID1 disks

2019-05-11 Thread Pascal Hambourg
On 11/05/2019 at 14:03, An Liu wrote: Hi, Pascal, I personally didn't use the --removable option while installing GRUB, but built a standalone EFI boot executable for grub (say, grubx64.efi with modules); it could be copied everywhere you want, and it has saved me many times. Are these two the same

Re: Disk setup to boot both RAID1 disks

2019-05-11 Thread An Liu
Hi, Pascal, I personally didn't use the --removable option while installing GRUB, but built a standalone EFI boot executable for grub (say, grubx64.efi with modules); it could be copied everywhere you want, and it has saved me many times. Are these two the same thing? > My method of choice would be to

Re: Disk setup to boot both RAID1 disks

2019-05-11 Thread Pascal Hambourg
On 11/05/2019 at 12:18, Rory Campbell-Lange wrote: I wish to configure a server with two SATA connected SSD disks in software RAID1. I'm not sure how best to configure the disks for boot, particularly in UEFI boot mode, to ensure that /dev/sdb can be booted if (for example) /dev/sda fails. I

Disk setup to boot both RAID1 disks

2019-05-11 Thread Rory Campbell-Lange
I wish to configure a server with two SATA connected SSD disks in software RAID1. I'm not sure how best to configure the disks for boot, particularly in UEFI boot mode, to ensure that /dev/sdb can be booted if (for example) /dev/sda fails. I have been configuring each disk as follows: Device

Re: mdadm started before disks are detected

2019-03-26 Thread Alexander V. Makartsev
On 25.03.2019 0:50, Mimiko wrote: > hello. > > I came across a problem when booting. In the server is installed 4 > disks connected to raid controller and 2 disks connected to > motherboard sata interfaces. During booting disks connected to raid > controller are detected before

Re: mdadm started before disks are detected

2019-03-26 Thread Sven Hartge
Mimiko wrote: > On 25.03.2019 21:23, Sven Hartge wrote: >> Please update your system, Debian 6.0 Squeeze is severely out of date >> and should NOT be used for any production systems and even less when >> connected to the public Internet. > Yes I know. But this is not the answer I'm searching.

Re: mdadm started before disks are detected

2019-03-26 Thread Mimiko
On 25.03.2019 21:23, Sven Hartge wrote: Please update your system, Debian 6.0 Squeeze is severely out of date and should NOT be used for any production systems and even less when connected to the public Internet. Yes I know. But this is not the answer I'm searching.

Re: mdadm started before disks are detected

2019-03-25 Thread John Hasler
> Please update your system, Debian 6.0 Squeeze is severely out of date > and should NOT be used for any production systems and even less when > connected to the public Internet. It certainly should not be connected to the Internet but there is no reason not to keep using it for off-line systems

Re: mdadm started before disks are detected

2019-03-25 Thread Sven Hartge
Mimiko wrote: > Linux 2.6.32-5-amd64 #1 SMP Tue May 13 16:34:35 UTC 2014 x86_64 GNU/Linux Please update your system, Debian 6.0 Squeeze is severely out of date and should NOT be used for any production systems and even less when connected to the public Internet. Grüße, Sven -- Sigmentation

Re: mdadm started before disks are detected

2019-03-25 Thread Mimiko
On 24.03.2019 22:46, Sven Hartge wrote: I came across a problem when booting. In the server is installed 4 disks connected to raid controller and 2 disks connected to motherboard sata interfaces. During booting disks connected to raid controller are detected before md raid assembling process

Re: mdadm started before disks are detected

2019-03-24 Thread Sven Hartge
Mimiko wrote: > I came across a problem when booting. In the server is installed 4 > disks connected to raid controller and 2 disks connected to > motherboard sata interfaces. During booting disks connected to raid > controller are detected before md raid assembling process, while t
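On setups like this, where a controller presents its disks only after md assembly has started, one common workaround is to delay the root mount via the kernel command line. A configuration sketch — the 10-second value is an arbitrary example; the line belongs in /etc/default/grub, followed by running update-grub:

```shell
# /etc/default/grub fragment: wait before mounting root so slow
# controllers can finish presenting their disks. The value is an example.
GRUB_CMDLINE_LINUX_DEFAULT="quiet rootdelay=10"
```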
