hello Andy,
thanks for the detailed answer.
Andy Smith writes:
> Hello,
>
> On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote:
>> Andy Smith writes:
>> > But to be absolutely sure you may wish to totally ignore md0 and
>> > its member devices during install as all their data and the
Hello,
On Fri, Dec 29, 2023 at 06:17:07PM +0100, Felix Natter wrote:
> Andy Smith writes:
> > But to be absolutely sure you may wish to totally ignore md0 and
> > its member devices during install as all their data and the
> > metadata on their member devices will still be there after
> >
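A minimal sketch of that reassembly on the freshly installed system,
assuming mdadm is installed and the members' metadata survived intact:

    mdadm --assemble --scan                          # detect and assemble existing arrays
    mdadm --detail --scan >> /etc/mdadm/mdadm.conf   # record ARRAY lines for boot
    update-initramfs -u                              # let the initramfs assemble it early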
hi Steve,
thanks for the quick reply!
Steve McIntyre writes:
> fnat...@gmx.net wrote:
>>
>>I have /dev/md0 mounted at /storage which consists of two HDDs.
>>
>>Now I would like to add an SSD drive for better performance of
>>VMs. Usually, before doing this,
>> Now I would like to add an SSD drive for better performance of
>> VMs. Usually, before doing this, I make sure that all of my disks are
>> mounted using UUID and not device names. I do not think this is
>> the case for the two member HDDs of md0 (cat /proc/mdstat).
>> Is there an easy way to fix this?
>
> What are you t
fnat...@gmx.net wrote:
>
>I have /dev/md0 mounted at /storage which consists of two HDDs.
>
>Now I would like to add an SSD drive for better performance of
>VMs. Usually, before doing this, I make sure that all of my disks are
>mounted using UUID and not device names
Hi Felix,
On Fri, Dec 29, 2023 at 04:46:10PM +0100, Felix Natter wrote:
> I have /dev/md0 mounted at /storage which consists of two HDDs.
>
> Now I would like to add an SSD drive for better performance of
> VMs. Usually, before doing this, I make sure that all of my disks are
>
hello Debian experts,
I have /dev/md0 mounted at /storage which consists of two HDDs.
Now I would like to add an SSD drive for better performance of
VMs. Usually, before doing this, I make sure that all of my disks are
mounted using UUID and not device names. I do not think this is
the case for the two member HDDs of md0 (cat /proc/mdstat).
Is there an easy way to fix this?
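A minimal sketch of the usual fix, assuming /dev/md0 carries an ext4
filesystem (the UUID below is a placeholder from blkid):

    blkid /dev/md0
    # /dev/md0: UUID="0f1e2d3c-...." TYPE="ext4"
    # /etc/fstab entry by UUID instead of device name:
    # UUID=0f1e2d3c-....  /storage  ext4  defaults  0  2

The member HDDs themselves need no fstab entries: mdadm assembles md0
from the md superblock UUIDs on the members, not from /dev/sdX names.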
On 07/10/2022 15:39, Andy Smith wrote:
Yeah, it's a mess with EFI. I wrote this in November 2020:
"Redundancy for EFI System Partition: what do people do in
2020?"
https://lists.debian.org/debian-user/2020/11/msg00455.html
(worth reading replies for more authoritative
Hi,
On Fri, Oct 07, 2022 at 02:50:06PM +0100, Andrew Wood wrote:
> How can I ensure that the system is capable of booting from either remaining
> disk in the event of a failure?
Yeah, it's a mess with EFI. I wrote this in November 2020:
"Redundancy for EFI System Partition: what do people
In the past when I've had 2 disks in a RAID1 mirror I've been able to
replicate the MBR on both such that the system is capable of booting
from either in the event of a failure, using the command
dpkg-reconfigure grub-pc, which then allowed both disks to be selected.
I was under the impression
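For BIOS/MBR systems the underlying commands are simply (a minimal
sketch, assuming the mirror members are /dev/sda and /dev/sdb):

    grub-install /dev/sda
    grub-install /dev/sdb

dpkg-reconfigure grub-pc achieves the same by letting you tick both
disks in the device list.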
On 2020-08-20 14:58, Andy Smith wrote:
... dm-integrity can now be used with LUKS (with or
without encryption) to add checksums that force a read error when
they don't match. When there is redundancy (e.g. LVM or MD) a read
can then come from a good copy and the bad copy will be repaired.
So,
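A hedged sketch of the setup being described, assuming cryptsetup 2.x
with LUKS2 (both commands wipe the target device):

    # LUKS2 over dm-integrity: mismatching checksums surface as read
    # errors, which MD or LVM RAID can then repair from a good copy
    cryptsetup luksFormat --type luks2 --integrity hmac-sha256 /dev/sdX

    # or integrity without encryption:
    integritysetup format /dev/sdX
    integritysetup open /dev/sdX integ0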
On Thu, Aug 20, 2020 at 01:34:58PM -0700, David Christensen wrote:
> On 2020-08-20 08:32, rhkra...@gmail.com wrote:
> >On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:
> >>Contrary to the other (very valid) points, my backups are always on
> >>a LUKS drive, no partition table.
Hello,
On Thu, Aug 20, 2020 at 05:30:20PM -0400, Dan Ritter wrote:
> David Christensen wrote:
> > Some people have mentioned md RAID. tomas has mentioned LUKS. I believe
> > both of them add checksums to the contained contents. So, bit-rot within a
> > container should be caught by the
David Christensen wrote:
> On 2020-08-20 08:32, rhkra...@gmail.com wrote:
> I have been pondering bit-rot mitigation on non-checksumming filesystems.
>
>
> Some people have mentioned md RAID. tomas has mentioned LUKS. I believe
> both of them add checksums to the contained contents. So,
On 2020-08-20 08:32, rhkra...@gmail.com wrote:
On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:
Contrary to the other (very valid) points, my backups are always on
a LUKS drive, no partition table. Rationale is, should I lose it, the
less visible information the better. Best if
On 2020-08-20 04:39, Thomas Schmitt wrote:
Hi,
a while ago i asked about the periodic knocking noise after each 4 seconds
of my freshly bought 4 TB HDD WD4003FRYZ. We came to the conclusion that
this was a reason to return the disk to the seller. "Click of Death" et al.
I did and got a
On Thursday, August 20, 2020 03:43:55 AM to...@tuxteam.de wrote:
> Contrary to the other (very valid) points, my backups are always on
> a LUKS drive, no partition table. Rationale is, should I lose it, the
> less visible information the better. Best if it looks like a broken
> USB stick. No
Thomas Schmitt wrote:
> ---
>
> So my next question to recent HDD buyers:
>
> What 4 TB HDD should i want for best reliability and no noises when idle ?
> Any recent experiences ?
Toshiba and Hitachi (owned by WD but not
Hi,
a while ago i asked about the periodic knocking noise after each 4 seconds
of my freshly bought 4 TB HDD WD4003FRYZ. We came to the conclusion that
this was a reason to return the disk to the seller. "Click of Death" et al.
I did and got a replacement disk. That one knocks less loudly, more
On Thu, 20 Aug 2020 09:43:55 +0200
wrote:
> On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote:
> > On 2020-08-19 03:03, Urs Thuermann wrote:
> > >David Christensen writes:
> >
> > >>When using a drive as backup media, are there likely use-cases
> > >>that benefit from
On Wed, Aug 19, 2020 at 02:41:02PM -0700, David Christensen wrote:
> On 2020-08-19 03:03, Urs Thuermann wrote:
> >David Christensen writes:
>
> >>When using a drive as backup media, are there likely use-cases that
> >>benefit from configuring the drive with no partition, a single PV,
> >>single
On 2020-08-19 03:03, Urs Thuermann wrote:
David Christensen writes:
When using a drive as backup media, are there likely use-cases that
benefit from configuring the drive with no partition, a single PV,
single VG, single LV, and single filesystem vs. configuring the drive
with a single
David Christensen writes:
> Thanks for the explanation. It seems that pvcreate(8) places an LVM
> disk label and an LVM metadata area onto disks or partitions when
> creating a PV; including a unique UUID:
>
> https://www.man7.org/linux/man-pages/man8/pvcreate.8.html
Yes, co
Thanks for the explanation. It seems that pvcreate(8) places an LVM
disk label and an LVM metadata area onto disks or partitions when
creating a PV; including a unique UUID:
https://www.man7.org/linux/man-pages/man8/pvcreate.8.html
When using a drive as backup media, are there likely use-cases that
benefit from configuring the drive with no partition, a single PV,
single VG, single LV, and single
Urs Thuermann writes:
> IMO the best solution is to use LVM. I use it since 2001 on most
> drives and I don't have partitions. And I prefer to use device names
> over using the *UUID or *LABEL prefixes. With LVM, device names are
> predictable /dev/mapper/<vg>-<lv> with symlinks
> /dev/<vg>/<lv>.
Following
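For illustration, a minimal sketch of the scheme Urs describes, with
hypothetical names vg0 and backup:

    pvcreate /dev/sdb                    # whole-disk PV, no partition table
    vgcreate vg0 /dev/sdb
    lvcreate -n backup -l 100%FREE vg0
    mkfs.ext4 /dev/vg0/backup            # same node as /dev/mapper/vg0-backup

Both /dev/vg0/backup and /dev/mapper/vg0-backup stay stable no matter
which /dev/sdX the disk enumerates as.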
David Christensen writes:
> AIUI the OP was mounting an (external?) drive partition for use as a
> destination for backups. Prior to upgrading to Testing, the root
> partition was /dev/sda1 (no LVM?) and the backup partition was
> /dev/sdb1 (no LVM?). After upgrading to Testing, the root
On 2020-08-18 11:27, Urs Thuermann wrote:
"Rick Thomas" writes:
The /dev/sdx names for devices have been unpredictable for quite a
while. Which one is sda and which sdb will depend on things like
timing -- which one gets recognized by the kernel first.
The best solution is to either use UUID or LABEL when you fsck
"Rick Thomas" writes:
> The /dev/sdx names for devices have been unpredictable for quite a
> while. Which one is sda and which sdb will depend on things like
> timing -- which one gets recognized by the kernel first.
>
> The best solution is to either use UUID or LABEL when you fsck
> and/or
Hi,
i wrote:
> > I only deem *UUID as safe,
Nicolas George wrote:
> UUID can get duplicated too. Just have somebody copy the whole block
> device with "good ol' dd".
Yes, sure. A HDD of mine got 128 GPT slots of 128 bytes each from the
Debian installation. So the primary GPT including "protective MBR"
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
# / was on /dev/sda1 during installation
UUID=3f50ca38-20f3-4a12-880c-a1283ac6e41b /    ext4    errors=remount-ro 0    1
# swap was on /dev/sda5 during installation
Thomas Schmitt (12020-08-18):
> I only deem *UUID as safe, unless the same names on different devices
> are intended and always only one of those devices will be connected.
UUID can get duplicated too. Just have somebody copy the whole block
device with "good ol' dd".
Regards,
--
Nicolas
Hi,
didier gaumet wrote:
> give a name to the underlying [GPT] partition
Let me add the hint that a GPT partition "name" is a user defined string
(in fstab and lsblk: PARTLABEL=) whereas the partition UUIDs in GPT
get handed out by partition editors automatically as random data
(human readable
On Mon, Aug 17, 2020, at 4:42 PM, hobie of RMN wrote:
> Hi, All -
>
> My brother has been issuing "mount /dev/sdb1" prior to backing up some
> files to a second hard disk. He lately upgraded to 'testing', and it
> appears (from result of running df) that what the system now calls
> /dev/sdb1 is
Hello,
Apparently, it is also possible to either:
- give a name to the filesystem (use e2label to do so, the filesystem being
ext4) and mount the filesystem by using this name as a parameter of the mount
command instead of /dev/sd* or a UUID
- give a name to the underlying partition (use
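A minimal sketch of both variants, using a hypothetical label "backup"
(the partition-name variant assumes a GPT disk):

    # filesystem label on ext4:
    e2label /dev/sdb1 backup
    mount LABEL=backup /mnt
    # GPT partition name, e.g. via parted:
    parted /dev/sdb name 1 backup
    mount PARTLABEL=backup /mnt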
works even if disks are added and removed. See fstab(5).
#
--
John Doe
output:
root@shelby:~# cat /etc/fstab
# /etc/fstab: static file system information.
#
# Use 'blkid' to print the universally unique identifier for a
# device; this may be used with UUID= as a more robust way to name devices
# that works even if disks are added and removed. See fstab(5).
#
#
On 2020-08-17 16:42, hobie of RMN wrote:
Hi, All -
My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk. He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of
Hi, All -
My brother has been issuing "mount /dev/sdb1" prior to backing up some
files to a second hard disk. He lately upgraded to 'testing', and it
appears (from result of running df) that what the system now calls
/dev/sdb1 is what he has thought of as /dev/sda1, the system '/'
partition.
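A minimal sketch of checking which device is which before mounting, so
the stable identifier can be used instead of /dev/sdb1:

    lsblk -o NAME,SIZE,FSTYPE,UUID,LABEL,MOUNTPOINT
    blkid /dev/sdb1

Whatever UUID or LABEL these report can then replace the bare device
name in the mount command or in fstab.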
On 2020-07-25 07:25, Thomas Schmitt wrote:
See below for the full output of smartctl -a.
I saved one drive that had the click of death (and died shortly
thereafter). My notes indicated it passed the manufacturer diagnostic
tests, but failed the manufacturer full erase procedure. The last
On 2020-07-25 02:16, Thomas Schmitt wrote:
David Christensen wrote:
Click of death.
At least this is the reality which i will present to the disk vendor
while negotiating about replacement.
But personally i still have doubts that it is this particular problem.
The knocking is not
Hi,
i quoted smartctl:
> >   4 Start_Stop_Count     0x0012   100   100   000   Old_age   Always   -   18
> >   9 Power_On_Hours       0x0012   100   100   000   Old_age   Always   -   19
> >  12 Power_Cycle_Count    0x0032   100   100   000   Old_age   Always   -
On Sat, Jul 25, 2020 at 04:25:16PM +0200, Thomas Schmitt wrote:
>   4 Start_Stop_Count     0x0012   100   100   000   Old_age   Always   -   18
>   9 Power_On_Hours       0x0012   100   100   000   Old_age   Always   -   19
>  12 Power_Cycle_Count    0x0032   100
> Stefan Monnier wrote:
>> I guess it's still spin-down
^
like
Sorry, my fingers didn't obey my brain,
Stefan
Hi,
Reco wrote:
> It's all other lines that are interesting here.
> I.e. 'Vendor Specific SMART Attributes with Thresholds' section.
I wanted to wait with the long list until the self-test is done.
See below for the full output of smartctl -a.
Stefan Monnier wrote:
> I guess it's still
> It is its way of saying it's an unsupported feature and you cannot
> disable drive heads parking this way. Was worth the shot.
I guess it's still spin-down: WD drives support it but just ignore the
APM settings of "how long to wait before spin-down" and use their own
algorithm instead.
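A hedged sketch of the knobs under discussion, assuming a drive that
honours them (as noted above, many WD models do not):

    hdparm -B /dev/sda       # read the current APM level, if supported
    hdparm -B 127 /dev/sda   # levels 1-127 permit spin-down, 128-254 do not
    hdparm -S 242 /dev/sda   # standby timeout; 242 means 1 hour on the vendor scale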
On Sat, Jul 25, 2020 at 11:16:16AM +0200, Thomas Schmitt wrote:
> Reco wrote:
> > It's simple:
> > smartctl -t long /dev/sda
>
> The short test yielded
> Num  Test_Description    Status      Remaining  LifeTime(hours)  LBA_of_first_error
> # 1  Short offline       Completed
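For reference, the usual way to run and then read back a self-test:

    smartctl -t long /dev/sda       # starts the test in the background
    smartctl -l selftest /dev/sda   # shows the self-test log when done
    smartctl -a /dev/sda            # full attribute and log dump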
Hi,
rhkra...@gmail.com wrote:
> You might not have read the entire [Click_of_death] article.
Indeed. Now i need to search my old loudspeakers in order to compare
the sound file on Wikipedia with the disk's sound.
D. R. Evans wrote:
> I can say that my experience (YMMV) is that 100% of the
Hi.
On Fri, Jul 24, 2020 at 11:35:34PM +0200, Thomas Schmitt wrote:
> Reco wrote:
> > Have you tried to disable drive heads parking via hdparm?
>
> hdparm -J ?
> The man page says "The factory default is eight (8) seconds".
> That would be about twice as long as what i experience.
>
>
On 2020-07-24 14:35, Thomas Schmitt wrote:
Hi,
Hello. :-)
David Christensen wrote:
https://en.wikipedia.org/wiki/Click_of_death
But that's a different technology (and 20 years ago).
I have a few Zip drives on the shelf, but only rarely used them back
when. My recent "Click of death"
On Fri, 2020-07-24 at 12:06 -0700, David Christensen wrote:
> On 2020-07-24 11:49, Thomas Schmitt wrote:
> > Hi,
> >
> > i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0)
> > and
> > observe a strange behavior with a provisional Debian 10 LXDE
> > installation.
> >
> > If the
You might not have read the entire article.
>
Having experienced this phenomenon multiple times on both Zip disks (!) and
hard drives, I can say that my experience (YMMV) is that 100% of the drives
that exhibit this phenomenon have failed sometime not long after the
phenomenon began -- I recently had a hard
On 25/7/20 4:49 am, Thomas Schmitt wrote:
Hi,
i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and
observe a strange behavior with a provisional Debian 10 LXDE installation.
If the drive has power then it makes a plonking noise every 3 to 5
seconds. The plonk is louder when
On Friday, July 24, 2020 05:35:34 PM Thomas Schmitt wrote:
> David Christensen wrote:
> > https://en.wikipedia.org/wiki/Click_of_death
>
> But that's a different technology (and 20 years ago).
You might not have read the entire article.
Hi,
David Christensen wrote:
> https://en.wikipedia.org/wiki/Click_of_death
But that's a different technology (and 20 years ago).
> If you cannot return the drive, I would download, install, and run
> "Data Lifeguard Diagnostic for Windows":
>
On 2020-07-24 12:14, Reco wrote:
Hi.
On Fri, Jul 24, 2020 at 08:49:42PM +0200, Thomas Schmitt wrote:
Hi,
i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and
observe a strange behavior with a provisional Debian 10 LXDE installation.
If the drive has power then it makes
Hi.
On Fri, Jul 24, 2020 at 08:49:42PM +0200, Thomas Schmitt wrote:
> Hi,
>
> i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and
> observe a strange behavior with a provisional Debian 10 LXDE installation.
>
> If the drive has power then it makes a plonking noise every
Thomas Schmitt wrote:
> If the drive has power then it makes a plonking noise every 3 to 5
> seconds. The plonk is louder when Debian runs, but can also be
> heard (and felt by direct finger contact with the disk) if only EFI
> is running. The sound is not really loud but well hearable when the
>
On 2020-07-24 11:49, Thomas Schmitt wrote:
Hi,
i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and
observe a strange behavior with a provisional Debian 10 LXDE installation.
If the drive has power then it makes a plonking noise every 3 to 5
seconds. The plonk is louder when
Hi,
i got my new computer with a 4 TB WD Gold (WDC WD4003FRYZ-01F0DB0) and
observe a strange behavior with a provisional Debian 10 LXDE installation.
If the drive has power then it makes a plonking noise every 3 to 5
seconds. The plonk is louder when Debian runs, but can also be
heard (and felt by
writing the
email above as far as I remember. I just didn't know how long it would
take until it was approved :)
>>
> Thanks for all this help, guys. Does anyone have any thoughts on why one
> generation of an external disk cage wouldn't require this and just spun
> down the disks automatically.
> See man hd-idle for the details.
>
> One could also write to debian-backports, CC: the maintainer and ask
> nicely for a backport ;)
>
Thanks for all this help, guys. Does anyone have any thoughts on why one
generation of an external disk cage wouldn't require this and just spun
down the disks automatically.
On Tue, 01 Oct 19, 15:49:57, Alex Mestiashvili wrote:
>
> You may want to try hd-idle, it is not yet available in stable, but one
> can install it from testing (it is not advisable in general, but the
> divergence between buster and testing is not that big right now)
> wget it from
>
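A hedged sketch of a typical hd-idle configuration once installed; the
variable names below are those the Debian packaging used at the time,
so check /etc/default/hd-idle on your version:

    # /etc/default/hd-idle
    START_HD_IDLE=true
    HD_IDLE_OPTS="-i 0 -a sdb -i 600"   # default: never; sdb: spin down after 10 min

See man hd-idle for the exact flag semantics.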
On 9/29/19 1:30 PM, Mark Fletcher wrote:
> Since a fresh install of buster, an external USB3 hard disk cage from
> Terramaster that I own is not automatically spinning down the disks in
> it when they go unused for a time.
>
> I used a previous generation of the cage with Stretch
> logger "Setting spindown on disk drive: $DISK_DEV"
> sdparm --flexible -6 --set SCT=4000 $DISK_DEV
> sdparm --flexible -6 --set STANDBY=1 $DISK_DEV
I found sdparm inscrutable so far so I'm really curious how you came up
with the above incantation (`sdparm -al /dev/sdb`
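For what it's worth, sdparm can read the same mode-page fields back with
--get, which helps make the incantation less opaque (my understanding is
that the standby condition timer counts 100 ms units, so SCT=4000 would
be about 400 seconds):

    sdparm --flexible -6 --get SCT /dev/sdb      # standby condition timer
    sdparm --flexible -6 --get STANDBY /dev/sdb  # standby_z enable bit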
On Mon, Sep 30, 2019 at 01:41:37AM -0700, B wrote:
>
>
> On 9/29/19 4:30 AM, Mark Fletcher wrote:
> > Any thoughts on where I might look to find settings that can be tweaked
> > to make it spin down when idle?
>
>
> See sdparm and hdparm tools. hdparm is probably the wrong tool because it's
>
On 9/29/19 4:30 AM, Mark Fletcher wrote:
Any thoughts on where I might look to find settings that can be tweaked
to make it spin down when idle?
See sdparm and hdparm tools. hdparm is probably the wrong tool because
it's for internal drives connected to IDE/ATA/SATA busses. The reason
On Fri, Sep 27, 2019 at 01:41:58PM +0100, Jonathan Dowland wrote:
> On Mon, Sep 23, 2019 at 03:11:07PM +0500, Alexander V. Makartsev wrote:
> > If I understood this right, you have two disks with data and they were
> > previously configured as RAID1 volume.
> > What make\mod
Since a fresh install of buster, an external USB3 hard disk cage from
Terramaster that I own is not automatically spinning down the disks in
it when they go unused for a time.
I used a previous generation of the cage with Stretch previously, it
spun down the disks when they were not in use
On Mon, Sep 23, 2019 at 03:11:07PM +0500, Alexander V. Makartsev wrote:
If I understood this right, you have two disks with data and they were
previously configured as RAID1 volume.
What make\model RAID-controller do you use? Because "cages" by
themselves offer only SATA\SAS ports
On 9/24/19 7:00 AM, David wrote:
On Tue, 24 Sep 2019 at 07:50, Mark Fletcher wrote:
On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote:
2. create the partition exactly as it was.
system. I recognise that was only that easy to do because I knew that
the original partition
On Tue, 24 Sep 2019 at 07:50, Mark Fletcher wrote:
> On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote:
> > 2. create the partition exactly as it was.
> system. I recognise that was only that easy to do because I knew that
> the original partition arrangement was so simple. If it had
On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote:
>
> Possible options:
> 1. if you use lilo, look for a copy of the partition table.
> 2. create the partition exactly as it was.
I'm running GRUB not lilo -- used lilo back in the 90's but switched to
grub whenever Debian started
On 23.09.2019 3:40, Mark Fletcher wrote:
> Hello
>
> While setting up a newly purchased RAID-capable hard disk cage I've
> damaged the contents of 2 hard disks and want to know if it is possible
> to recover.
>
> The cage has 5 disk slots each occupied by 3TB hard disks. 4
On 23/09/2019 08:37, Debian Buster wrote:
On Sun, 22 Sep 2019 23:40:51 +0100, Mark Fletcher wrote:
Hello
While setting up a newly purchased RAID-capable hard disk cage I've
damaged the contents of 2 hard disks and want to know if it is possible
to recover.
The cage has 5 disk slots each
On Sun, 22 Sep 2019 23:40:51 +0100, Mark Fletcher wrote:
> Hello
>
> While setting up a newly purchased RAID-capable hard disk cage I've
> damaged the contents of 2 hard disks and want to know if it is possible
> to recover.
>
> The cage has 5 disk slots each occupied
Hello
While setting up a newly purchased RAID-capable hard disk cage I've
damaged the contents of 2 hard disks and want to know if it is possible
to recover.
The cage has 5 disk slots each occupied by 3TB hard disks. 4 of the
disks came from an older cage by the same maker (TerraMaster
Hi,
Thanks for your reply. After a reboot, the problem is fixed. Frankly it is
strange, as I had already rebooted and I have just one kernel release, but
well... it works.
Thanks for your help
Regards
Jean-Philippe MENGUAL
On 28/06/2019 at 11:21, Reco wrote:
> Hi.
>
> On Fri, Jun 28,
Hi.
On Fri, Jun 28, 2019 at 11:09:37AM +0200, Jean-Philippe MENGUAL wrote:
>
> Here is what I get now when I plug in a USB stick or disk:
> https://paste.debian.net/1089600/
>
> I think it is a bug, as I use Sid. Where should I report? Kernel? udev?
> systemd?
A relevant part of
Hi,
Here is what I get now when I plug in a USB stick or disk:
https://paste.debian.net/1089600/
I think it is a bug, as I use Sid. Where should I report? Kernel? udev?
systemd?
Best regards,
--
Jean-Philippe MENGUAL
On 26/06/2019 at 20:15, Ross Boylan wrote:
So do you think the chroot generated initrd would have been OK if I'd
mounted proc?
Yes.
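A minimal sketch of the usual bind mounts before such a chroot, assuming
the target root is mounted at /mnt:

    mount -t proc proc /mnt/proc
    mount --rbind /sys /mnt/sys
    mount --rbind /dev /mnt/dev
    chroot /mnt update-initramfs -u -k all

With /proc present inside the chroot, the cryptroot hook can read
/proc/mounts and the failure quoted above goes away.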
On Tue, Jun 25, 2019 at 12:31 PM Pascal Hambourg wrote:
>
> On 24/06/2019 at 01:40, Ross Boylan wrote:
> >
> > # update-initramfs -u -k 4.19.0-5-amd64
> > update-initramfs: Generating /boot/initrd.img-4.19.0-5-amd64
> > /usr/share/initramfs-tools/hooks/cryptroot: 64:
> >
On 24/06/2019 at 01:40, Ross Boylan wrote:
# update-initramfs -u -k 4.19.0-5-amd64
update-initramfs: Generating /boot/initrd.img-4.19.0-5-amd64
/usr/share/initramfs-tools/hooks/cryptroot: 64:
/usr/share/initramfs-tools/hooks/cryptroot: cannot open /proc/mounts:
No such file
cryptsetup:
containing MODULES=dep) if you select "targeted".
> 4. Manually constructing or modifying an initrd is pretty delicate business.
> 5. systemd seems to react very badly to a mount failure in fstab.
One strategic decision I've made for years is to always have two root
partitions o
Ross Boylan wrote:
...
> LESSONS
> 1. Don't ever get in this situation. It's a mess.
> 2. If you're planning on moving to new hardware, ensure your initrd is
> generated with MODULES=most BEFORE the move.
> 3. Just because initramfs.conf has MODULES=most doesn't mean that's
> what you get. In my
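A minimal sketch of the preparation in lesson 2, run on the old system
before the move (stock Debian paths):

    # set MODULES=most in the main config
    sed -i 's/^MODULES=.*/MODULES=most/' /etc/initramfs-tools/initramfs.conf
    update-initramfs -u -k all

Per lesson 3, also check /etc/initramfs-tools/conf.d/ for snippets that
override this with MODULES=dep.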
ORIGINAL PROBLEM
Move disks into a new system (hardware) and found I couldn't boot into
the old system (OS on the disk). The initrd couldn't even see the
drives.
CAUSE
New hardware required drivers missing from the initrd, which had been
created with MODULES=dep. Also, fstab referenced a drive
]: Samsung Electronics Co
Ltd NVMe SSD Controller SM961/PM961 [144d:a804]
First try: chroot into the old system and update-initramfs. The
result can see the disks (yay) but has no, or at least insufficient,
crypto.
Second try: rsync from the old initrd to the new one.* Considered
using -u
Ross Boylan wrote:
> In brief: moved all the 3.5" disks from an old system to a new one,
> and now I can't boot into buster. In the initrd environment no disks
> appear in /dev; the disks are all connected through an LSI Host Bus
> Adapter card (only on the new system). I can
On 23/06/2019 at 04:06, Ross Boylan wrote:
My leading suspect for why the hard disks aren't recognized is that
they are all attached through
02:00.0 Serial Attached SCSI controller: LSI Logic / Symbios Logic
SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)
whereas before they were
In brief: moved all the 3.5" disks from an old system to a new one,
and now I can't boot into buster. In the initrd environment no disks
appear in /dev; the disks are all connected through an LSI Host Bus
Adapter card (only on the new system). I can boot into Ubuntu on the
new system, and
On 11/05/2019 at 14:03, An Liu wrote:
Hi, Pascal,
I personally didn't use the --removable option when installing GRUB,
but instead built a standalone EFI boot executable for grub (say,
grubx64.efi with modules); it can be copied wherever you want,
and it has saved me many times.
Are these two the same thing?
Hi, Pascal,
I personally didn't use the --removable option when installing GRUB,
but instead built a standalone EFI boot executable for grub (say,
grubx64.efi with modules); it can be copied wherever you want,
and it has saved me many times.
Are these two the same thing?
> My method of choice would be to
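For comparison, a sketch of the two mechanisms being contrasted (both
are standard grub tools; paths assumed):

    # 1. --removable: install the normal modular image to the fallback
    #    path \EFI\BOOT\BOOTX64.EFI, which any firmware will find
    grub-install --target=x86_64-efi --efi-directory=/boot/efi --removable

    # 2. a self-contained image with modules and config baked in, which
    #    can be copied onto any ESP by hand
    grub-mkstandalone -O x86_64-efi -o /tmp/grubx64.efi \
        "boot/grub/grub.cfg=/boot/grub/grub.cfg"

So no, they are not the same thing, though both remove the dependency on
NVRAM boot entries pointing at one particular disk.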
On 11/05/2019 at 12:18, Rory Campbell-Lange wrote:
I wish to configure a server with two SATA connected SSD disks in
software RAID1.
I'm not sure how best to configure the disks for boot, particularly in
UEFI boot mode, to ensure that /dev/sdb can be booted if (for example)
/dev/sda fails.
I
I wish to configure a server with two SATA connected SSD disks in
software RAID1.
I'm not sure how best to configure the disks for boot, particularly in
UEFI boot mode, to ensure that /dev/sdb can be booted if (for example)
/dev/sda fails.
I have been configuring each disk as follows:
Device
On 25.03.2019 0:50, Mimiko wrote:
> hello.
>
> I came across a problem when booting. The server has 4
> disks connected to a raid controller and 2 disks connected to the
> motherboard sata interfaces. During boot, disks connected to the raid
> controller are detected before
Mimiko wrote:
> On 25.03.2019 21:23, Sven Hartge wrote:
>> Please update your system, Debian 6.0 Squeeze is severely out of date
>> and should NOT be used for any production systems and even less when
>> connected to the public Internet.
> Yes I know. But this is not the answer I'm searching for.
On 25.03.2019 21:23, Sven Hartge wrote:
Please update your system, Debian 6.0 Squeeze is severely out of date
and should NOT be used for any production systems and even less when
connected to the public Internet.
Yes I know. But this is not the answer I'm searching for.
> Please update your system, Debian 6.0 Squeeze is severely out of date
> and should NOT be used for any production systems and even less when
> connected to the public Internet.
It certainly should not be connected to the Internet but there is no
reason not to keep using it for off-line systems
Mimiko wrote:
> Linux 2.6.32-5-amd64 #1 SMP Tue May 13 16:34:35 UTC 2014 x86_64 GNU/Linux
Please update your system, Debian 6.0 Squeeze is severely out of date
and should NOT be used for any production systems and even less when
connected to the public Internet.
Regards,
Sven
--
Sigmentation
On 24.03.2019 22:46, Sven Hartge wrote:
I came across a problem when booting. The server has 4
disks connected to a raid controller and 2 disks connected to the
motherboard sata interfaces. During boot, disks connected to the raid
controller are detected before the md raid assembly process
Mimiko wrote:
> I came across a problem when booting. The server has 4
> disks connected to a raid controller and 2 disks connected to the
> motherboard sata interfaces. During boot, disks connected to the raid
> controller are detected before the md raid assembly process, while t
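A hedged sketch of the usual workarounds for such a detection race,
assuming an initramfs-based boot (the option names are standard kernel
and mdadm ones, not specific to this poster's setup):

    # kernel command line: wait before mounting root
    #   rootdelay=10
    # /etc/mdadm/mdadm.conf: pin arrays to their UUIDs, not probe order
    ARRAY /dev/md0 UUID=xxxxxxxx:xxxxxxxx:xxxxxxxx:xxxxxxxx

The UUID placeholder comes from mdadm --detail --scan.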