Hi Marc,
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
3. grub BOOT FAILS IF ANY LV HAS dm-integrity, EVEN IF NOT LINKED TO /
if I reboot now, grub2 complains about rimage issues, clears the screen
and then I am at the grub2 prompt.
Booting is only possible with Debian rescue, disabling the
the exact address
where the kernel & initrd was, regardless of abstraction layers :->)
Recently, I have been playing with RAID-on-LVM (I was mostly using LVM
on md before, which worked with grub), and it works too.
Where grub fails is if you have /boot in the same LVM volume group
where
> I found this [1], quoting: "I'd also like to share an issue I've
> discovered: if /boot's partition is a LV, then there must not be a
> raidintegrity LV anywhere before that LV inside the same VG. Otherwise,
> update-grub will show an error (disk `lvmid/.../...' not found) and GRUB
> cannot
Hello,
On Wed, May 22, 2024 at 10:13:06AM +0000, Andy Smith wrote:
> metadata tags to some PVs prevented grub from assembling them,
grub is indeed very fragile if you use dm-integrity anywhere on any of
your LVs in the same VG where /boot is (or at least if, in the list
of LVs, the dm-integrity
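A minimal sketch of the work-around discussed in this thread: drop
dm-integrity from the affected LVs so grub can scan the VG again. The
vg1/root naming follows the example used later in the thread; adapt it
to your layout.

  lvs -a -o name,segtype vg1            # integrity segments show up as "integrity"
  lvconvert --raidintegrity n vg1/root  # remove dm-integrity from one raid1 LV
  update-grub                           # the lvmid "disk not found" error should go away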
Hello,
On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote:
> I will try this work-around and report back here. As I said, I can
> live with /boot on RAID without dm-integrity, as long as the rest can be
> dm-integrity+raid protected.
I'm interested in how you get on.
I d
Hello,
On Wed, May 22, 2024 at 08:57:38AM +0200, Marc SCHAEFER wrote:
> I will try this work-around and report back here. As I said, I can
> live with /boot on RAID without dm-integrity, as long as the rest can be
> dm-integrity+raid protected.
So, enable dm-integrity on all LVs,
[1]
https://unix.stackexchange.com/questions/717763/lvm2-integrity-feature-breaks-lv-activation
…cryptsetup (from LUKS), but LVM RAID PVs -- I don't use
LUKS encryption anyway on that system
2) the issue is not the kernel not supporting it, because when the
system is up, it works (I have done tests to destroy part of the
underlying devices, they get detected and fixed correctly)
3) the
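For reference, a sketch of how such destroy-and-repair tests can be
checked on LVM RAID (vg1 as above; the LV name "data" is hypothetical):

  lvchange --syncaction check vg1/data    # scrub: count mismatches without fixing
  lvs -o name,raid_sync_action,raid_mismatch_count vg1
  lvchange --syncaction repair vg1/data   # rewrite bad blocks from the healthy leg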
On 20/05/24 at 14:35, Marc SCHAEFER wrote:
Any idea what could be the problem? Any way to just make grub2 ignore
the rimage (sub)volumes at setup and boot time? (I could live with /, aka
vg1/root, not using dm-integrity, as long as the data/docker/etc volumes
are integrity-protected.) Or how
Hello,
1. INITIAL SITUATION: WORKS (no dm-integrity at all)
I have an up-to-date Debian bookworm system that boots correctly with
kernel 6.1.0-21-amd64.
It is setup like this:
- /dev/nvme1n1p1 is /boot/efi
- /dev/nvme0n1p2 and /dev/nvme1n1p2 are the two LVM physical volumes
- a volume
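Presumably the volume group was built along these lines; a hypothetical
sketch (sizes and the "data" LV name are made up; vg1/root is the
naming used in this thread):

  pvcreate /dev/nvme0n1p2 /dev/nvme1n1p2
  vgcreate vg1 /dev/nvme0n1p2 /dev/nvme1n1p2
  lvcreate --type raid1 -m1 -L 20G -n root vg1                     # plain raid1 for /
  lvcreate --type raid1 -m1 --raidintegrity y -L 100G -n data vg1  # raid1 + dm-integrity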
Hello Thierry,
Indeed, my mistake (the second one, actually ;-)) was to re-create the
2 groups "directly"... I could have re-imported them in "foreign"
mode. That will serve me next time...
I am going to run further tests with the card's utility
Hello,
I tinker with Dell servers, but nothing extraordinary.
From memory, you do not re-create a RAID, you import it (something
like "... foreign ...").
In general, when the server starts with unknown disks, if it
sees a compatible RAID, it
Hello Didier,
Thank you for your mail, but here you are targeting the use of a
software RAID solution with mdadm.
In my case, it is hardware RAID with a dedicated card.
But I will still have a look, in case there is something to be
gleaned...
Best regards
On 18/12/2023 at 10:08, David BERCOT wrote:
[...]
I wanted to create a 3rd RAID group and, when applying the
configuration, the system deleted my first 2 groups!!!
I re-created them but, unfortunately, it cannot find its data
(and does not even boot into Debian
, on a server (Dell PowerEdge R540 with a PERC H330 card), I had
the following configuration:
- RAID0 group: Debian
- RAID5 group: data
I wanted to create a 3rd RAID group and, when applying the
configuration, the system deleted my first 2 groups!!!
I re-created them but
files that glow blue. ;-)
My files glow Greene so I am safe
), restoring from the snapshot should
produce a set of files that work correctly.
Radioactive I see
Do not eat files that glow blue. ;-)
On 12/13/23 10:42, Pocket wrote:
After removing raid, I completely redesigned my network to be more in line with
the howtos and other information.
Please
On 12/13/23 08:51, Pocket wrote:
I gave up using raid many years ago and I used the extra drives as
backups.
Wrote a script to rsync /home to the backup drives.
While external HDD enclosures can work, my favorite is mobile racks:
https://www.startech.com/en-us/hdd/drw150satbk
https
On 23/7/23 at 10:19, Alex Muntada wrote:
What I am getting at with all this is: if it turns out that a
head reads and writes, say, 4 MiB per request, it is very
inefficient to set RAID chunks of 512 KiB, since the operating
system will issue the same operation 8 times
…stallation with a small program (script)
bypassing DebianInstaller, but what I am exploring now is a
single customisation so as to stay with DebianInstaller when I
do not want to complicate things further.
In the d-i preseed configurations for RAID I have not seen
that the size can be set,
I am not looking for the optimal size for the operating system, but
the optimal one for the physical device. That is, if the head of a
hard disk writes at least 2048 KiB, then it would be very inefficient
to set 512 KiB blocks/chunks at the RAID layer, because the operating
system could ask to write the same disk segment 4 times to fill it
with independent pieces of 512 KiB each, multiplying the physical
operations.
…the RAID block size with the
DebianInstaller.
Nowhere in the documentation do I find that the block size (or
chunk, as mdadm calls it) can be changed. I have followed this path:
https://wiki.debian.org/DebianInstaller/Preseed
https://www.debian.org/releases/stable/amd64/apbs04.en.html
https://salsa.debian.org
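Outside of d-i, the chunk size can be set when creating the array by
hand (for instance from a shell before or after the installer's
partitioning step); a sketch with hypothetical device names:

  # RAID0 with an explicit 2048 KiB chunk (mdadm's default is 512 KiB)
  mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=2048 /dev/sda2 /dev/sdb2
  mdadm --detail /dev/md0 | grep -i chunk   # confirm the chunk size in use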
Good afternoon,
Sometimes I install Debian on a computer with several hard disks of
similar size and, apart from a partition for /boot on the boot disk,
I set up a software RAID array with the rest of the space.
Often I set up RAID0 to speed up old machines, or
On Sun, 2 Jul 2023 18:23:31 +0100
Mick Ab wrote:
> I am thinking of changing the motherboard because of problems that
> might be connected to the current motherboard. The new motherboard
> would be the same make and model as the current motherboard.
>
> Would I need to recreate t
I have a software RAID 1 array of two hard drives. Each of the two disks
contains the Debian operating system and user data.
I am thinking of changing the motherboard because of problems that might be
connected to the current motherboard. The new motherboard would be the same
make and model
Tim Woodall (12023-03-17):
> Yes. It's possible. Took me about 5 minutes to work out the steps. All
> of which are already mentioned upthread.
All of them, except one.
> mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing
Until now, all suggestions with mdadm started wit
(plus a hot
spare). On top of that is LUKS, and on top of that is LVM. I keep meaning
to manually fail a disk then store it in a safe deposit box or something as
a backup, but I have not gotten around to it.
It sounds to me like adding an iSCSI volume (e.g. from AWS) to the RAID as
an additional
On 3/17/23 12:36, Gregory Seidman wrote:
On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote:
[...]
PS There's that old saying, "RAID is not a substitute for a backup".
What you're trying to do sounds suspiciously similar to an old "RAID
split-mirror" backup te
md=/dev/md0      # assumed setup; the snippet is truncated (devices hypothetical)
d1=/dev/sdX1     # existing device holding the data
d2=/dev/sdY1     # second device to mirror onto
mdadm --build ${md} --level=raid1 --raid-devices=2 ${d1} missing
echo "Mounting single disk raid"
mount ${md} /mnt/fred
ls -al /mnt/fred
mdadm ${md} --add ${d2}
sleep 10   # fragile: better to wait until /proc/mdstat shows the resync finished
echo "Done sleeping - sync had better be done!"
mdadm ${md} --fail ${d2}
mdadm ${md} --remove ${d2}
Nicolas George (12023-03-17):
> It is not vagueness, it is genericness: /dev/something is anything and
> contains anything, and I want a solution that works for anything.
Just to be clear: I KNOW that what I am asking, the ability to
synchronize an existing block device onto another over the
Greg Wooledge (12023-03-17):
> > I have a block device on the local host /dev/something with data on it.
^^^
There. I have data, therefore, any solution that assumes the data is not
there can only be proposed by somebody who
On Fri, Mar 17, 2023 at 06:00:46PM +0300, Reco wrote:
[...]
> PS There's that old saying, "RAID is not a substitute for a backup".
> What you're trying to do sounds suspiciously similar to an old "RAID
> split-mirror" backup technique. Just saying.
This thread has
On Fri, Mar 17, 2023 at 05:01:57PM +0100, Nicolas George wrote:
> Dan Ritter (12023-03-17):
> > If Reco didn't understand your question, it's because you are
> > very light on details.
>
> No. Reco's answers contradict the very first sentence of my first
> e-mail.
The first sentence of your
On Fri, 17 Mar 2023, Nicolas George wrote:
Dan Ritter (12023-03-17):
If Reco didn't understand your question, it's because you are
very light on details.
No. Reco's answers contradict the very first sentence of my first
e-mail.
Is this possible?
How can Reco's answers contradict that?
Dan Ritter (12023-03-17):
> If Reco didn't understand your question, it's because you are
> very light on details.
No. Reco's answers contradict the very first sentence of my first
e-mail.
--
Nicolas George
Reco (12023-03-17):
> Well, theoretically you can use Btrfs instead.
No, I cannot. Obviously.
> What you're trying to do sounds suspiciously similar to an old "RAID
> split-mirror" backup technique.
Absolutely not.
If you do not understand the question, it is okay to not ans
implementing mdadm + iSCSI + ext4 would probably be the
best way to achieve whatever you want to do.
PS There's that old saying, "RAID is not a substitute for a backup".
What you're trying to do sounds suspiciously similar to an old "RAID
split-mirror" backup technique. Just saying.
Reco
Reco (12023-03-17):
> Yes, it will destroy the contents of the device, so backup
No. If I accepted to have to rely on an extra copy of the data, I would
not be trying to do something complicated like that.
--
Nicolas George
ering"
(syncronization between mirror sides) concerns only actual data residing
in a zpool. I.e. if you have 1Tb mirrored zpool which is filled to 200Gb
you will resync 200Gb.
In comparison, mdadm RAID resync will happily read 1Tb from one drive
and write 1Tb to another *unless* you're using
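The truncated sentence above most likely refers to mdadm's write-intent
bitmap, which limits a resync to the regions marked dirty; a sketch
(device name hypothetical):

  mdadm --grow --bitmap=internal /dev/md0   # add an internal write-intent bitmap
  # after an unclean shutdown or a re-add, only dirty regions are resynced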
> mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
> --metadata=1.0 /dev/local_dev missing
>
> --metadata=1.0 is highly important here, as it's one of the few mdadm
> metadata formats that keeps said metadata at the end of the device.
Well, I am sorry to report that you did not read my
…processor architecture restrictions, and somewhat unusual design
decisions for the filesystem storage.
So let's keep it on MDADM + iSCSI for now.
> What I want to do:
>
> 1. Stop programs and umount /dev/something
>
> 2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
>
).
What I want to do:
1. Stop programs and umount /dev/something
2. mdadm --create /dev/md0 --level=mirror --force --raid-devices=1 \
--metadata-file /data/raid_something /dev/something
→ Now I have /dev/md0 that is an exact image of /dev/something, with
changes on it synced instantaneously.
3
On 2/23/23 11:05, Tim Woodall wrote:
On Wed, 22 Feb 2023, Nicolas George wrote:
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
I've wanted this ...
I think only hardware raid where the bios thinks it's a single disk
On Wed, 22 Feb 2023, Nicolas George wrote:
Hi.
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
I've wanted this but settled for using dd to copy the start of the disk,
fdisk to rewrite the GPT properly then mdadm
Hello,
I have seen some installations with the following setup:
GPT
sda1  sdb1  bios_grub  md1  0.9
sda2  sdb2  efi        md2  0.9
sda3  sdb3  /boot      md3  0.9
sda4  sdb4  /          md?  1.1
On such installations it is important that the grub installation is
done with "grub-install --removable".
I mean there were some grub bugs about
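A sketch of how the EFI mirror in such a layout might be created
(device names hypothetical; metadata 1.0, like 0.9, keeps the
superblock at the end of the device, so the firmware sees a plain FAT
filesystem on each member):

  mdadm --create /dev/md2 --level=1 --metadata=1.0 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.vfat -F 32 /dev/md2
  # install grub to the removable/fallback path so the firmware finds it
  # on either disk without NVRAM boot entries
  grub-install --removable --efi-directory=/boot/efi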
On 22.02.2023 at 17:07, Nicolas George wrote:
> Unfortunately, that puts the partition table
> and EFI partition outside the RAID: if you have to add/replace a disk,
> you need to partition and reinstall GRUB, that makes a few more
> manipulations on top of syncing the RAID.
Yes, i g
Nicolas George wrote:
> Hi.
>
> Is there a solution to have a whole-disk RAID (software, mdadm) that is
> also partitioned in GPT and bootable in UEFI?
Not that I know of. An EFI partition needs to be FAT32 or VFAT.
What I think you could do:
Partition the disks with GPT: 2 par
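A sketch of that idea with hypothetical names and sizes: a small EFI
system partition on each disk kept outside the RAID, and a large RAID
member for everything else:

  sgdisk -n1:0:+512M -t1:EF00 /dev/sda   # EF00 = EFI System
  sgdisk -n2:0:0     -t2:FD00 /dev/sda   # FD00 = Linux RAID
  sgdisk -n1:0:+512M -t1:EF00 /dev/sdb
  sgdisk -n2:0:0     -t2:FD00 /dev/sdb
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  mkfs.vfat -F 32 /dev/sda1              # each ESP formatted and synced by hand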
e to make an USB
stick that was bootable in legacy mode, bootable in UEFI mode and usable
as a regular USB stick (spoiler: it worked, until I tried it with
Windows.)
But it will not help for this issue.
> The only issue, i have had a look at, was the problem to have a raid,
> that is bootable
up (not use them at all) and unfortunately that
applies to standard GPT tools as well, but the dual bootability can
solve some problems.
The only issue, i have had a look at, was the problem to have a raid,
that is bootable no matter which one of the drives initially fails, a
problem, that
Hi.
Is there a solution to have a whole-disk RAID (software, mdadm) that is
also partitioned in GPT and bootable in UEFI?
What I imagine:
- RAID1, mirroring: if you ignore the RAID, the data is there.
- The GPT metadata is somewhere not too close to the beginning of the
drive nor too close
On 25 January 2023, Daniel Caillibaud wrote:
> The point of raid0 is to (nearly) double disk performance (with two
> volumes in the raid0), and I had understood that volumes of similar
> sizes were needed to keep that.
Yes, that is true, to avoid filling one
On 24/01/23 at 13:26, Michel Verdier wrote:
> In fact you need 2 *partitions* of the same size for each raid1
> pair. But you can have different sizes for the raid0.
You can, but does that not degrade performance?
The point of raid0 is to (nearly) double disk performance
On 24 January 2023, Daniel Caillibaud wrote:
> For raid10, it seems to me that you need 4 disks of the same size.
>
> It is better to build the raid1 pairs first, then the raid0 across
> the two pairs (otherwise losing a single disk loses everything).
In fact you need 2 *partitions* of the same
On 4 January 2023, Olivier backup my spare wrote:
> There, the card refuses to do raid 10
>
> Can I do it with Debian? I have never done software raid under
> linux, so I am asking.
Yes, it works very well with mdadm.
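A sketch of the pairs-of-mirrors layout suggested in this thread
(device names hypothetical: one 2 TB pair and one 4 TB pair, mirrored
separately and then striped):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1  # 2 TB mirror
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1  # 4 TB mirror
  mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/md0 /dev/md1    # stripe over both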
Hello
I recovered my mother's PC.
I put in a cheap Chinese RAID card (RAID 0, 1, 5, 10)
Problem: 2 disks of 2 TB and 2 disks of 4 TB
I made a 2 TB array and a 4 TB array in RAID 0
There, the card refuses to do RAID 10
Can I do it with Debian? I have never done software RAID
computers.
When I boot the flash drive in a Dell Precision 3630 Tower that has Windows
11 Pro installed on the internal NVMe drive, the internal PCIe NVMe drive is
not visible to Linux:
The work-around is to change CMOS Setup -> System Configuration -> SATA
Operation from "RAID On" to "AHCI".
acity storage costs to a
minimum."
I believe that is marketing speak for "the computer supports Optane
Memory", not "every machine comes with Optane Memory".
I believe that's the pseudo-RAID you are seeing in the UEFI setup screen.
Maybe you can see the physical driv
s/dfb/p/precision-3630-workstation/pd,
the machine has Optane. I believe that's the pseudo-RAID you are
seeing in the UEFI setup screen.
Maybe you can see the physical drives using raid utilities.
Jeff
…11.2G 0 crypt /
sr0 11:0 1 1024M 0 rom
2022-12-23 18:46:19 root@laalaa ~/laalaa.tracy.holgerdanske.com
# l /dev/n*
/dev/null /dev/nvram
/dev/net:
./ ../ tun
The work-around is to change CMOS Setup -> System Configuration -> SATA
Operation from "RAID On" to "AHCI".
On 10.11.2022 at 14:40, Curt wrote:
(or maybe a RAID array is
conceivable over a network and a distance?).
Not only conceivable, but indeed practicable: Linbit DRBD
Hi Gary,
On Mon, Aug 22, 2022 at 10:00:34AM -0400, Gary Dale wrote:
> I'm running Debian/Bookworm on an AMD64 system. I recently added a second
> drive to it for use in a RAID1 array.
What was the configuration of the array before you added the new
drive? Was it a RAID-1 with one missing
2 blocks super 1.2 level 5, 64k chunk, algorithm 2 [3/3] [UUU]
bitmap: 0/30 pages [0KB], 65536KB chunk
unused devices:
root@hawk:~#
You may notice from my output that I have raid on 0, 1, 3, and 4. 4 is
the spare. 3 and 4 are not in numeric order. And there is no 2. So I'm
not sure that the fa
On 8/22/22, Gary Dale wrote:
> I'm running Debian/Bookworm on an AMD64 system. I recently added a
> second drive to it for use in a RAID1 array. However I'm now getting
> regular messages about "SparesMissing event on...".
>
> cat /proc/mdstat shows the problem: active raid1 sda1[0] sdb1[2] - the
I'm running Debian/Bookworm on an AMD64 system. I recently added a
second drive to it for use in a RAID1 array. However I'm now getting
regular messages about "SparesMissing event on...".
cat /proc/mdstat shows the problem: active raid1 sda1[0] sdb1[2] - the
newly added drive is showing up as
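A frequent cause of recurring "SparesMissing" mails (a hedged guess;
the thread is truncated here) is an ARRAY line in /etc/mdadm/mdadm.conf
that still carries a spares=1 clause from an earlier configuration; a
sketch of checking and fixing that:

  grep ^ARRAY /etc/mdadm/mdadm.conf   # look for a stale spares=1 clause
  # edit the file, drop "spares=1" if no hot spare is actually wanted, then:
  update-initramfs -u                 # so the boot-time copy of mdadm.conf matches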
Hi!
I have 3 DEBIAN 11 servers with RAID 5 and its spare, and 1 with RAID 1
and its spare, all with SSDs, which come out cheaper than SAS.
No problems so far.
Regards,
Marcelo.-
…if they all fail at once, they leave you with a real breakdown.
Regards.
I DO NOT LIKE THAT.
Do you not remember the make/model?
Precisely, in a RAID one always tries to use the same make, model and,
if possible, series, so that the speed of all of them is exactly the same.
JAP
I found it:
https
On 2022-07-18 at 10:21 -0300, Debian wrote:
> Good morning.
>
> This question is for Camaleón, since I think she is the only person
> with experience on the subject among those who usually read here.
My experience with RAID is scarce (I have only worked with hardware
raid, leve
Good morning.
This question is for Camaleón, since I think she is the only person
with experience on the subject among those who usually read here.
I have to replace a small server running software RAID-5 on Debian
with 4 HDD disks of 1 TB each, which has been chugging along for
4
On 20/10/2021 at 06:43, Jean-Michel OLTRA wrote:
Hello,
On Tuesday 19 October 2021, Kohler Gerard wrote...
The problem: I no longer remember which Debian manages Grub, nor on
which disk and which partition it is installed.
How can I find the answers to this?
You have ventured into a complicated installation.
For my part, I stopped doing multiboot managed by one OS; it always
ends badly when an OS is reinstalled.
On my desktop machine, which has several OSes, when I do an
installation I unplug all the disks that are not involved, then
Hello,
On Tuesday 19 October 2021, Kohler Gerard wrote...
> The problem: I no longer remember which Debian manages Grub, nor on
> which disk and which partition it is installed.
>
> How can I find the answers to this?
With `fdisk -l`, or from the grub menu during
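On a UEFI system, a couple of other standard commands answer this as
well; a short sketch:

  efibootmgr -v                        # which disk/partition each boot entry points at
  lsblk -o NAME,PARTTYPE,MOUNTPOINT    # locate the EFI system partition(s)
  ls /boot/efi/EFI                     # the distro directory (e.g. "debian") that owns grub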
hello,
wanting to install the testing version, I am running into a silly
problem:
My machine has UEFI.
I have 3 hard disks, the first two of which are set up as RAID1
(/dev/sda and /dev/sdb) and hold my data; the third (/dev/sdc) is
my system disk.
On my system disk
Many Thanks for the very helpful reply, Reco!
--
Felix Natter
debian/rules!
Thanks Sven!
--
Felix Natter
debian/rules!
Many Thanks for the very helpful reply Andy!
--
Felix Natter
debian/rules!
Darac Marjal writes:
> On 11/09/2021 17:55, Felix Natter wrote:
>> hello fellow Debian users,
>>
>> I have an SSD for the root filesystem, and two HDDs using RAID1 for
>> /storage running Debian10. Now I need a plan B in case the upgrade
>> fails.
>
> Just want to check that you've not missed
hi Andrei,
Andrei POPESCU writes:
thank you for your answer.
> On Sb, 11 sep 21, 18:55:56, Felix Natter wrote:
>> hello fellow Debian users,
>>
>> I have an SSD for the root filesystem, and two HDDs using RAID1 for
>> /storage running Debian10. Now I need a plan B in case the upgrade
>>
On Tuesday 14 September 2021 12:55:41 Dan Ritter wrote:
> Gene Heskett wrote:
> > This is interesting and I will likely do it when I install the
> > debian-11.1-net-install I just burnt.
> >
> > But, I have installed 4, 1 terabyte samsung SSD's on a separate
> >
Gene Heskett wrote:
> This is interesting and I will likely do it when I install the
> debian-11.1-net-install I just burnt.
>
> But, I have installed 4, 1 terabyte samsung SSD's on a separate non-raid
> controller card which I intend to use as a software raid-6 or 10