Re: RAID-1 and disk I/O

2021-07-18 Thread rhkramer
On Sunday, July 18, 2021 09:37:53 AM David wrote:
> On Sun, 18 Jul 2021 at 21:08,  wrote:
> > Interesting -- not surprising, makes sense, but something (for me, at
> > least) to keep in mind -- probably not a good idea to run on an old
> > drive that hasn't been backed up.
> 
> Sorry if my language was unclear. If you read the manpage context, it's
> explaining that drives can be tested without taking them out of service.
> So performance is only "degraded" while the test is running, compared
> to normal operation, because the drive is also busy testing itself.
> It doesn't mean permanent degradation.

Ahh, ok -- thanks for the clarification!



Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/18/21 2:29 PM, Urs Thuermann wrote:

David Christensen  writes:


You should consider upgrading to Debian 10 -- more people run that and
you will get better support.


It's on my TODO list.  As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM.  It's only my private home server and performance is
still sufficient but I hope to reduce power consumption considerably.



I ran Debian on desktop hardware as a SOHO server for many years, but 
grew concerned about bit rot.  So, I migrated to low-end enterprise 
hardware and FreeBSD with ZFS.  The various SATA battles made things 
tougher than they should have been, but I fixed several problems and 
everything is now stable.




# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



Why limit unified context to 20 lines?  You may be missing information
(I have not counted the differences, below).  I suggest '-U' alone.


20 lines of context are just enough to capture everything.  You can see
this because there are fewer than 20 context lines at the beginning and
end of the diff and only one hunk.  GNU diff doesn't allow -U without a
line count.



Sorry -- I do not use the -U option and misread the diff(1) man page.



Yes, the old Gigabyte mainboard has only 3 Gbps ports.  I wasn't aware
of this but have just looked up the specs.



SATA2 should be plenty for Seagate ST2000DM001 drives.  Two PCIe x1 
SATA3 HBA's or one PCIe x2+ SATA3 HBA might improve performance slightly 
under specific workloads, but I would just stay with motherboard SATA2 
ports (unless you find problems with them).




And the server is about 8 years old, initially with only 1 hard drive,
which crashed at a time when my backup was too small to hold everything.  This
meant a lot of work (and quite some money) to get everything running
again and to recover data which wasn't in the backup.



I think we have all been burned by trying to "make do" with inadequate 
backup devices.  I threw money at the problem after my last significant 
data loss, and now have backups several drives deep.  The funny thing 
is: when you're prepared, the gremlins know it and stay away.  ;-)




The smartctl(8) RAW_VALUE column is tough to read.  Sometimes it looks
like an integer.  Other times, it looks like a bitmap or big-endian/
little-endian mix-up.  The VALUE column is easier.  Both 119 and 117
are greater than 100, so I would not worry.


Hm, in some cases the RAW_VALUE looked somehow "more readable", and the
VALUE looked suspicious to me.  And here I found the explanation in the
smartctl(8) man page:

 Each Attribute has a "Raw" value, printed under the heading
 "RAW_VALUE", and a "Normalized" value printed under the
 heading "VALUE".
 [...]
 Each vendor uses their own algorithm to convert this "Raw"
 value to a "Normalized" value in the range from 1 to 254.
 [...]
 So to summarize: the Raw Attribute values are the ones that
 might have a real physical interpretation, such as
 "Temperature Celsius", "Hours", or "Start-Stop Cycles".



Thank you for the clarification.  As usual, I am guilty of inadequate 
RTFM...




Thanks for all your answers, hints, and suggestions.  With that, and by
reading the man page more carefully (mostly motivated by your and
others' answers), I learned quite a lot of new things about SMART and
how to use and read it.



YW.  I am learning too.


David



Re: RAID-1 and disk I/O

2021-07-18 Thread Urs Thuermann
David Christensen  writes:

> You should consider upgrading to Debian 10 -- more people run that and
> you will get better support.

It's on my TODO list.  As well as upgrading the very old hardware.
Currently, it's a Gigabyte P35-DS3L with an Intel Core2Duo E8400 CPU
and 8 GB RAM.  It's only my private home server and performance is
still sufficient but I hope to reduce power consumption considerably.

> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
> 
> 
> ext4?  That lacks integrity checking.
> 
> 
> btrfs?  That has integrity checking, but requires periodic balancing.

Mostly ext4, for the /, /var, /var/spool/news, /usr, /usr/local, and /home
file systems.  The /usr/src file system is btrfs, as are some test file
systems.  There are also 4 VMs: FreeBSD and NetBSD with their own partitions,
slices, and ufs file systems, one Linux VM with ext4, and one very
old Linux VM (kernel 2.4) with its own LVM in two LVs and 10 ext3 file
systems.

> Are both your operating system and your data on this array?  I always
> use a single, small solid-state device for the system drive, configure
> my hardware so that it is /dev/sda, and use separate drive(s) for data
> (/dev/sdb, /dev/sdc, etc.).  Separating these concerns simplifies
> system administration and disaster preparedness/ recovery.

Yes, everything is in the LVs on /dev/md0.  Except for some external
USB hard drives for backup (4 TB) and some other seldom-used stuff
(e.g. an NTFS drive with some old data from my wife's laptop; I cannot
persuade her to use Linux).

> > but I found the following with
> > smartctl:
> > --
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
> 
> 
> Why limit unified context to 20 lines?  You may be missing information
> (I have not counted the differences, below).  I suggest '-U' alone.

20 lines of context are just enough to capture everything.  You can see
this because there are fewer than 20 context lines at the beginning and
end of the diff and only one hunk.  GNU diff doesn't allow -U without a
line count.

> You have a SATA transfer speed mismatch -- 6.0 Gbps drives running at
> 3.0 Gbps.  If your ports are 3 Gbps, fine.  If your ports are 6 Gbps,
> you have bad ports, cables, racks, docks, trays, etc..

Yes, the old Gigabyte mainboard has only 3 Gbps ports.  I wasn't aware
of this but have just looked up the specs.
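
For reference, the negotiated speed can also be read from the running
system (the device name below is just an example):

# dmesg | grep -i 'SATA link up'
# smartctl -x /dev/sda | grep 'SATA Version'

The kernel log line reports something like "SATA link up 3.0 Gbps".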

> Seek_Error_Rate indicates those drives have seen better days, but are
> doing their job.
> 
> 
> Power_On_Hours indicates those drives have seen lots of use.

> Power_Cycle_Count indicates that the machine runs 24x7 for long
> periods without rebooting.

Yes, the server runs 24/7 except for kernel updates, and a power
outage 2 weeks ago (my UPS batteries also need replacement... )-:

And the server is about 8 years old, initially with only 1 hard drive,
which crashed at a time when my backup was too small to hold everything.  This
meant a lot of work (and quite some money) to get everything running
again and to recover data which wasn't in the backup.

This was almost 6 years ago and I then bought 2 Seagate Barracuda
drives for RAID-1 and a larger backup drive.  One of the two Seagate
drives is still running and is /dev/sda.  The other drive /dev/sdb
crashed after only 9.5 months of operation and I got it replaced by
the dealer.  This was when I loved my decision to set up RAID-1.  With
no downtime I pulled the failed drive, returned it to the dealer, ran
the system a week or two with only one drive, got the replacement
drive from the dealer, hot-plugged it in, synced, and was happy :-)
Only a short time after this I also bought a 3.5" removable mounting
frame for 2 drives to swap drives even more easily.

> Runtime_Bad_Block looks acceptable.

> End-to-End_Error and Reported_Uncorrect look perfect.  The drives
> should not have corrupted or lost any data (other hardware and/or
> events may have).

OK.

> Airflow_Temperature_Cel and Temperature_Celsius are higher than I
> like. I suggest that you dress cables, add fans, etc., to improve
> cooling.

OK, I'll have a look at that.

> UDMA_CRC_Error_Count for /dev/sda looks worrisome, both compared to
> /dev/sdb and compared to reports for my drives.
> 
> 
> Total_LBAs_Written for /dev/sda is almost double that of
> > /dev/sdb. Were those drives both new when put into RAID1?

Yes, see above.  But /dev/sdb was replaced after 9.5 months, so it has
a shorter lifetime.  Also, /dev/sda began to fail every couple of
months about a year ago.  I could always fix this by pulling the
drive, re-inserting it, and re-syncing.  This also caused more
write traffic to /dev/sda.
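
For the record, a sketch of that pull/re-add cycle with mdadm, assuming
the member is /dev/sda1 in /dev/md0 as described above:

# mdadm /dev/md0 --fail /dev/sda1      # mark the member failed, if the kernel hasn't already
# mdadm /dev/md0 --remove /dev/sda1    # remove it before pulling the drive / wiggling the cable
# mdadm /dev/md0 --add /dev/sda1       # add it back after re-inserting
# cat /proc/mdstat                     # watch the resync progress

The resync after --add is where the extra write traffic to /dev/sda
comes from.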

>

Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/18/21 2:16 AM, Reco wrote:

Hi.

On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:

But much more noticeable is the difference in data reads between the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything


Getting meaningful information from system monitoring tools is
non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++.
Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to
a time-stamped output file run once an hour by a cron job.


iostat belongs to the sysstat package.
sysstat provides sar, which, by default, gathers every detail of the
host's resource utilization and a little more once every 10 minutes.

There's little need for the kludges you're describing, for one can simply
invoke "sar -pd -f /var/log/sysstat/sa...".

Reco



Yes, sar(1) looks useful.  :-)


David



Re: RAID-1 and disk I/O

2021-07-18 Thread mick crane

On 2021-07-18 14:37, David wrote:

On Sun, 18 Jul 2021 at 21:08,  wrote:

On Saturday, July 17, 2021 09:30:56 PM David wrote:



> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.


Interesting -- not surprising, makes sense, but something (for me, at least)
to keep in mind -- probably not a good idea to run on an old drive that hasn't
been backed up.


Sorry if my language was unclear. If you read the manpage context, it's
explaining that drives can be tested without taking them out of service.

So performance is only "degraded" while the test is running, compared
to normal operation, because the drive is also busy testing itself.
It doesn't mean permanent degradation.


I admit I had to look twice at "running a test". "What!" Oh, "a running test".


mick
--
Key ID 4BFEBB31



Re: RAID-1 and disk I/O

2021-07-18 Thread David
On Sun, 18 Jul 2021 at 21:08,  wrote:
> On Saturday, July 17, 2021 09:30:56 PM David wrote:

> > The 'smartctl' manpage explains how to run and abort self-tests.
> > It also says that a running test can degrade the performance of the drive.

> Interesting -- not surprising, makes sense, but something (for me, at least)
> to keep in mind -- probably not a good idea to run on an old drive that hasn't
> been backed up.

Sorry if my language was unclear. If you read the manpage context, it's
explaining that drives can be tested without taking them out of service.
So performance is only "degraded" while the test is running, compared
to normal operation, because the drive is also busy testing itself.
It doesn't mean permanent degradation.



Re: RAID-1 and disk I/O

2021-07-18 Thread David Christensen

On 7/17/21 6:30 PM, David wrote:

On Sun, 18 Jul 2021 at 07:03, David Christensen
 wrote:

On 7/17/21 5:34 AM, Urs Thuermann wrote:



On my server running Debian stretch,
the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.



--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



-  9 Power_On_Hours  -O--CK   042   042   000    -    51289
+  9 Power_On_Hours  -O--CK   051   051   000    -    43740



   SMART Extended Self-test Log Version: 1 (1 sectors)
   Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
-# 1  Short offline   Completed without error   00% 21808 -
+# 1  Short offline   Completed without error   00% 14254 -


sda was last self-tested at 21808 hours and is now at 51289.
sdb was last self-tested at 14254 hours and is now at 43740.
And those were short (a couple of minutes) self-tests only.
So these drives have apparently only ever run one short self-test.


Thank you for the clarification.  :-)


David



Re: RAID-1 and disk I/O

2021-07-18 Thread rhkramer
On Saturday, July 17, 2021 09:30:56 PM David wrote:
> The 'smartctl' manpage explains how to run and abort self-tests.
> It also says that a running test can degrade the performance of the drive.

Interesting -- not surprising, makes sense, but something (for me, at least) 
to keep in mind -- probably not a good idea to run on an old drive that hasn't 
been backed up.



Re: RAID-1 and disk I/O

2021-07-18 Thread Reco
Hi.

On Sat, Jul 17, 2021 at 02:03:15PM -0700, David Christensen wrote:
> > But much more noticeable is the difference in data reads between the two
> > disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> > from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
> > for this, dmesg didn't give me anything
> 
> Getting meaningful information from system monitoring tools is
> non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++.
> Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to
> a time-stamped output file run once an hour by a cron job.

iostat belongs to the sysstat package.
sysstat provides sar, which, by default, gathers every detail of the
host's resource utilization and a little more once every 10 minutes.

There's little need for the kludges you're describing, for one can simply
invoke "sar -pd -f /var/log/sysstat/sa...".

Reco



Re: RAID-1 and disk I/O

2021-07-17 Thread David
On Sun, 18 Jul 2021 at 07:03, David Christensen
 wrote:
> On 7/17/21 5:34 AM, Urs Thuermann wrote:

> > On my server running Debian stretch,
> > the storage setup is as follows:
> > Two identical SATA disks with 1 partition on each drive spanning the
> > whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> > /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.

> > --
> > # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)

> > -  9 Power_On_Hours  -O--CK   042   042   000    -    51289
> > +  9 Power_On_Hours  -O--CK   051   051   000    -    43740

> >   SMART Extended Self-test Log Version: 1 (1 sectors)
> >   Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
> > -# 1  Short offline   Completed without error   00% 21808  -
> > +# 1  Short offline   Completed without error   00% 14254  -

sda was last self-tested at 21808 hours and is now at 51289.
sdb was last self-tested at 14254 hours and is now at 43740.
And those were short (a couple of minutes) self-tests only.
So these drives have apparently only ever run one short self-test.

I am a home user, and I run long self-tests regularly using
# smartctl -t long 
In my opinion these drives are due for a long self-test.
I have no idea if this will add any useful information,
but there's an obvious way to find out :)

A bit more info on self-tests:
https://serverfault.com/questions/732423/what-does-smart-testing-do-and-how-does-it-work

The 'smartctl' manpage explains how to run and abort self-tests.
It also says that a running test can degrade the performance of the drive.
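
A minimal sketch of that, with /dev/sdX standing in for the real device:

# smartctl -t long /dev/sdX      # start an extended self-test (a few hours on a 2 TB drive)
# smartctl -l selftest /dev/sdX  # read the self-test log once the estimated time has passed
# smartctl -X /dev/sdX           # abort a running self-test if it gets in the way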



Re: RAID-1 and disk I/O

2021-07-17 Thread David Christensen

On 7/17/21 5:34 AM, Urs Thuermann wrote:
On my server running Debian stretch, 



You should consider upgrading to Debian 10 -- more people run that and 
you will get better support.



I migrated to FreeBSD.



the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.



ext4?  That lacks integrity checking.


btrfs?  That has integrity checking, but requires periodic balancing.


I use ZFS.  That has integrity checking.  It is wise to do periodic 
scrubs to check for problems.
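
A sketch of what I mean, with 'tank' as an example pool name:

# zpool scrub tank       # read and verify every block in the pool
# zpool status -v tank   # shows scrub progress and any checksum errors found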



Are both your operating system and your data on this array?  I always 
use a single, small solid-state device for the system drive, configure 
my hardware so that it is /dev/sda, and use separate drive(s) for data 
(/dev/sdb, /dev/sdc, etc.).  Separating these concerns simplifies system 
administration and disaster preparedness/ recovery.




The disk I/O shows very different usage of the two SATA disks:

 # iostat | grep -E '^[amDL ]|^sd[ab]'
 Linux 5.13.1 (bit)  07/17/21  _x86_64_  (2 CPU)
 avg-cpu:  %user   %nice %system %iowait  %steal   %idle
            3.78    0.00    2.27    0.86    0.00   93.10
 Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
 sdb               4.54        72.16        61.25   54869901   46577068
 sda               3.72        35.53        61.25   27014254   46577068
 md0               5.53       107.19        57.37   81504323   43624519
 
The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
more than to the RAID device /dev/md0.  Is that the expected overhead
for RAID-1 meta data?

But much more noticeable is the difference in data reads between the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything 



Getting meaningful information from system monitoring tools is 
non-trivial.  Perhaps 'iostat 600' concurrent with a run of bonnie++. 
Or, 'iostat 3600 24' during normal operations.  Or, 'iostat' dumped to a 
time-stamped output file run once an hour by a cron job.  Beware of 
using multiple system monitoring tools at the same time -- they may 
access the same kernel data structures and step on each other.
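
A sketch of the cron idea, with hypothetical file names:

# apt install sysstat    # iostat ships in the sysstat package
# cat /etc/cron.d/iostat-hourly
0 * * * * root /usr/bin/iostat -t >> /var/log/iostat-hourly.log

The -t flag timestamps each report, so the log can be correlated with
whatever the workload was doing at the time.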




but I found the following with
smartctl:

--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)



Why limit unified context to 20 lines?  You may be missing information 
(I have not counted the differences, below).  I suggest '-U' alone.




--- /dev/fd/63  2021-07-17 12:09:00.425352672 +0200
+++ /dev/fd/62  2021-07-17 12:09:00.425352672 +0200
@@ -1,165 +1,164 @@
  smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
  Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
  
  === START OF INFORMATION SECTION ===

  Model Family: Seagate Barracuda 7200.14 (AF)



I burned up both old desktop drives and new enterprise drives when I put 
them into a server (Samba, CVS) for my SOHO network and ran them 24x7. 
As my arrays had only one redundant drive (e.g. two drives in RAID1, 
three drives in RAID5), I had the terrifying realization that I was at 
risk of losing everything if a drive failed before I had replaced it.  
I upgraded to all enterprise drives, bought a spare enterprise drive 
and put it on the shelf, built another server, and now replicate 
periodically to the second server and to tray-mounted old desktop 
drives used like backup tapes (and rotated on/off site).  I should 
probably put the spare drive into the live server and set it up as a 
hot spare.




  Device Model: ST2000DM001-1ER164
-Serial Number:W4Z171HL
-LU WWN Device Id: 5 000c50 07d3ebd67
+Serial Number:Z4Z2M4T1
+LU WWN Device Id: 5 000c50 07b21e7db
  Firmware Version: CC25
  User Capacity:2,000,397,852,160 bytes [2.00 TB]
  Sector Sizes: 512 bytes logical, 4096 bytes physical
  Rotation Rate:7200 rpm
  Form Factor:  3.5 inches
  Device is:In smartctl database [for details use: -P show]
  ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
  SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)



You have a SATA transfer speed mismatch -- 6.0 Gbps drives running at 
3.0 Gbps.  If your ports are 3 Gbps, fine.  If your ports are 6 Gbps, 
you have bad ports, cables, racks, docks, trays, etc..




  Local Time is:Sat Jul 17 12:09:00 2021 CEST
  SMART support is: Available - device has SMART capability.
  SMART support is: Enabled
  AAM feature is:   Unavailable
  APM level is: 254 (maximum performance)
  Rd look-ahead is: Enabled
  Write cache is:   Enabled
  ATA Security is:  Disabled, NOT FROZEN [SEC1]
  Wt Cache Reorder: Unavailable
  
  ===

Re: RAID-1 and disk I/O

2021-07-17 Thread Andy Smith
Hi Urs,

Your plan to change the SATA cable seems wise - your various error
rates are higher than I have normally seen.

Also worth bearing in mind that Linux MD RAID 1 will satisfy all
read IO for a given operation from one device in the mirror. If
you have processes that do occasional big reads then by chance those
can end up being served by the same device leading to a big
disparity in per-device LBAs read.

You can do RAID-10 (even on 2 or 3 devices) which will stripe data
at the chunk size resulting in even a single read operation being
striped across multiple devices, though overall this may not be more
performant than RAID-1, especially if your devices were
non-rotational. You would have to measure.
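
For the record, a sketch of creating such an array from scratch -- this
destroys existing data, and the device names are only placeholders:

# mdadm --create /dev/md0 --level=10 --layout=f2 --raid-devices=2 /dev/sdX1 /dev/sdY1

The 'far' (f2) layout is one of the layout choices that lets sequential
reads be striped across both members; near (n2) and offset (o2) are the
others.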

I don't know about the write overhead you are seeing.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: RAID-1 and disk I/O

2021-07-17 Thread Bob Weber

On 7/17/21 08:34, Urs Thuermann wrote:

Here, the noticable lines are IMHO

 Raw_Read_Error_Rate (208245592 vs. 117642848)
 Command_Timeout (8 14 17 vs. 0 0 0)
 UDMA_CRC_Error_Count (11058 vs. 29)

Do these numbers indicate a serious problem with my /dev/sda drive?
And is it a disk problem or a transmission problem?
UDMA_CRC_Error_Count sounds like a cable problem for me, right?

BTW, for a year or so I had problems with /dev/sda every couple of months,
where the kernel set the drive status in the RAID array to failed.  I
could always fix the problem by hot-plugging out the drive, wiggling
the SATA cable, re-inserting and re-adding the drive (without any
impact on the running server).  Now, I haven't seen the problem for
quite a while.  My suspicion is that the cable is still not working very
well, but failures are not frequent enough to set the drive to "failed"
status.

urs

I switched from Seagate to WD Red years ago since I couldn't get the Seagates 
to last more than a year or so.  I have one WD that is 6.87 years old with no 
errors -- well past the 5-year life expectancy.  In recent years WD has stirred 
up a marketing controversy with their Red drives.  See:


https://arstechnica.com/gadgets/2020/06/western-digital-adds-red-plus-branding-for-non-smr-hard-drives/

So be careful to get the Pro version if you decide to try WD. I use the 
WD4003FFBX (4T) drives (Raid 1) and have them at 2.8 years running 24/7 with no 
problems.


If you value your data, get another drive NOW -- they are already 5 and 5.8 years 
old!  Add it to the array, let it settle in (sync), and see what happens.  I 
hope your existing array can hold together long enough to add a 3rd drive.  I 
would have replaced those drives long ago, given all the errors reported.  You 
might also want to get new cables, since you have had problems in the past.


I also run self-tests weekly to make sure the drives are OK, and run smartctl -a 
daily.  I also run backuppc on a separate server to get backups of 
important data.
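
If it helps, the weekly/daily schedule can also be handed to smartd
instead of cron -- a sketch of /etc/smartd.conf lines (drive names as
examples):

/dev/sda -a -s (S/../.././02|L/../../6/03)   # short test daily at 02:00, long test Saturdays at 03:00
/dev/sdb -a -s (S/../.././02|L/../../6/03)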


There are some programs in /usr/share/mdadm that can check an array, but I would 
wait until you have a new drive added to the array before testing it.  
Here is the warning that comes with another script I found:




DATA LOSS MAY HAVE OCCURRED.

This condition may have been caused by one or more of the following events:

. A LEGITIMATE write to a memory mapped file or swap partition backed by a
    RAID1 (and only a RAID1) device - see the md(4) man page for details.

. A power failure when the array was being written-to.

. Data corruption by a hard disk drive, drive controller, cable etc.

. A kernel bug in the md or storage subsystems etc.

. An array being forcibly created in an inconsistent state using --assume-clean

This count is updated when the md subsystem carries out a 'check' or
'repair' action.  In the case of 'repair' it reflects the number of
mismatched blocks prior to carrying out the repair.

Once you have fixed the error, carry out a 'check' action to reset the count
to zero.

See the md (section 4) manual page, and the following URL for details:

https://raid.wiki.kernel.org/index.php/Linux_Raid#Frequently_Asked_Questions_-_FAQ

--

The problem is that if a mismatch count occurs, which drive (in RAID 1) is 
correct?  I also run programs like debsums to check programs after an update, so 
I know there is no bit rot in important programs, as explained above.
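
For reference, a sketch of kicking off that 'check' action by hand and
reading the count (md0 as in this thread):

# echo check > /sys/block/md0/md/sync_action   # or: /usr/share/mdadm/checkarray /dev/md0
# cat /proc/mdstat                             # shows the check progress
# cat /sys/block/md0/md/mismatch_cnt           # non-zero means mismatched blocks were found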


Hope this helps.

--



*...Bob*

Re: RAID-1 and disk I/O

2021-07-17 Thread Nicholas Geovanis
I'm going to echo your final thought there: Replace the SATA cables with 2
NEW ones of the same model. Then see how it goes, meaning rerun the tests
you just ran. If possible, try to make the geometries of the cables as
similar as you can: roughly same (short?) lengths, roughly as straight and
congruent as you are able.

Keep in mind that the minor flaws on the drive surfaces are different, each
drive from the other. The list of known bad blocks will be different from
one drive to the other and that can affect performance of the filesystem
built on it.

On Sat, Jul 17, 2021, 7:42 AM Urs Thuermann  wrote:

> On my server running Debian stretch, the storage setup is as follows:
> Two identical SATA disks with 1 partition on each drive spanning the
> whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
> /dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.
>
> The disk I/O shows very different usage of the two SATA disks:
>
> # iostat | grep -E '^[amDL ]|^sd[ab]'
> Linux 5.13.1 (bit)  07/17/21  _x86_64_  (2 CPU)
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>            3.78    0.00    2.27    0.86    0.00   93.10
> Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
> sdb               4.54        72.16        61.25   54869901   46577068
> sda               3.72        35.53        61.25   27014254   46577068
> md0               5.53       107.19        57.37   81504323   43624519
>
> The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
> more than to the RAID device /dev/md0.  Is that the expected overhead
> for RAID-1 meta data?
>
> But much more noticeable is the difference in data reads between the two
> disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
> from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
> for this, dmesg didn't give me anything but I found the following with
> smartctl:
>
>
> --
> # diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
> --- /dev/fd/63  2021-07-17 12:09:00.425352672 +0200
> +++ /dev/fd/62  2021-07-17 12:09:00.425352672 +0200
> @@ -1,165 +1,164 @@
>  smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
>  Copyright (C) 2002-16, Bruce Allen, Christian Franke,
> www.smartmontools.org
>
>  === START OF INFORMATION SECTION ===
>  Model Family: Seagate Barracuda 7200.14 (AF)
>  Device Model: ST2000DM001-1ER164
> -Serial Number:W4Z171HL
> -LU WWN Device Id: 5 000c50 07d3ebd67
> +Serial Number:Z4Z2M4T1
> +LU WWN Device Id: 5 000c50 07b21e7db
>  Firmware Version: CC25
>  User Capacity:2,000,397,852,160 bytes [2.00 TB]
>  Sector Sizes: 512 bytes logical, 4096 bytes physical
>  Rotation Rate:7200 rpm
>  Form Factor:  3.5 inches
>  Device is:In smartctl database [for details use: -P show]
>  ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
>  SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
>  Local Time is:Sat Jul 17 12:09:00 2021 CEST
>  SMART support is: Available - device has SMART capability.
>  SMART support is: Enabled
>  AAM feature is:   Unavailable
>  APM level is: 254 (maximum performance)
>  Rd look-ahead is: Enabled
>  Write cache is:   Enabled
>  ATA Security is:  Disabled, NOT FROZEN [SEC1]
>  Wt Cache Reorder: Unavailable
>
>  === START OF READ SMART DATA SECTION ===
>  SMART overall-health self-assessment test result: PASSED
>
>  General SMART Values:
>  Offline data collection status:  (0x82)Offline data collection
> activity
> was completed without error.
> Auto Offline Data Collection:
> Enabled.
>  Self-test execution status:  (   0)The previous self-test
> routine completed
> without error or no self-test has
> ever
> been run.
>  Total time to complete Offline
> -data collection:   (   89) seconds.
> +data collection:   (   80) seconds.
>  Offline data collection
>  capabilities:   (0x7b) SMART execute Offline immediate.
> Auto Offline data collection
> on/off support.
> Suspend Offline collection upon new
> command.
> Offline surface scan supported.
> Self-test supported.
> Conveyance Self-test supported.
>  

RAID-1 and disk I/O

2021-07-17 Thread Urs Thuermann
On my server running Debian stretch, the storage setup is as follows:
Two identical SATA disks with 1 partition on each drive spanning the
whole drive, i.e. /dev/sda1 and /dev/sdb1.  Then, /dev/sda1 and
/dev/sdb1 form a RAID-1 /dev/md0 with LVM on top of it.

The disk I/O shows very different usage of the two SATA disks:

# iostat | grep -E '^[amDL ]|^sd[ab]'
Linux 5.13.1 (bit)  07/17/21  _x86_64_  (2 CPU)
avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.78    0.00    2.27    0.86    0.00   93.10
Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sdb               4.54        72.16        61.25   54869901   46577068
sda               3.72        35.53        61.25   27014254   46577068
md0               5.53       107.19        57.37   81504323   43624519

The data written to the SATA disks is about 7% = (47 GB - 44 GB) / 44 GB
more than to the RAID device /dev/md0.  Is that the expected overhead
for RAID-1 meta data?

But much more noticeable is the difference in data reads between the two
disks, i.e. 55 GB and 27 GB, i.e. roughly twice as much data is read
from /dev/sdb compared to /dev/sda.  Trying to figure out the reason
for this, dmesg didn't give me anything but I found the following with
smartctl:

--
# diff -U20 <(smartctl -x /dev/sda) <(smartctl -x /dev/sdb)
--- /dev/fd/63  2021-07-17 12:09:00.425352672 +0200
+++ /dev/fd/62  2021-07-17 12:09:00.425352672 +0200
@@ -1,165 +1,164 @@
 smartctl 6.6 2016-05-31 r4324 [x86_64-linux-5.13.1] (local build)
 Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org
 
 === START OF INFORMATION SECTION ===
 Model Family: Seagate Barracuda 7200.14 (AF)
 Device Model: ST2000DM001-1ER164
-Serial Number:W4Z171HL
-LU WWN Device Id: 5 000c50 07d3ebd67
+Serial Number:Z4Z2M4T1
+LU WWN Device Id: 5 000c50 07b21e7db
 Firmware Version: CC25
 User Capacity:2,000,397,852,160 bytes [2.00 TB]
 Sector Sizes: 512 bytes logical, 4096 bytes physical
 Rotation Rate:7200 rpm
 Form Factor:  3.5 inches
 Device is:In smartctl database [for details use: -P show]
 ATA Version is:   ACS-2, ACS-3 T13/2161-D revision 3b
 SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
 Local Time is:Sat Jul 17 12:09:00 2021 CEST
 SMART support is: Available - device has SMART capability.
 SMART support is: Enabled
 AAM feature is:   Unavailable
 APM level is: 254 (maximum performance)
 Rd look-ahead is: Enabled
 Write cache is:   Enabled
 ATA Security is:  Disabled, NOT FROZEN [SEC1]
 Wt Cache Reorder: Unavailable
 
 === START OF READ SMART DATA SECTION ===
 SMART overall-health self-assessment test result: PASSED
 
 General SMART Values:
 Offline data collection status:  (0x82)Offline data collection activity
was completed without error.
Auto Offline Data Collection: Enabled.
 Self-test execution status:  (   0)The previous self-test routine 
completed
without error or no self-test has ever 
been run.
 Total time to complete Offline 
-data collection:   (   89) seconds.
+data collection:   (   80) seconds.
 Offline data collection
 capabilities:   (0x7b) SMART execute Offline immediate.
Auto Offline data collection on/off 
support.
Suspend Offline collection upon new
command.
Offline surface scan supported.
Self-test supported.
Conveyance Self-test supported.
Selective Self-test supported.
 SMART capabilities:(0x0003)Saves SMART data before entering
power-saving mode.
Supports SMART auto save timer.
 Error logging capability:(0x01)Error logging supported.
General Purpose Logging supported.
 Short self-test routine 
 recommended polling time:   (   1) minutes.
 Extended self-test routine
-recommended polling time:   ( 213) minutes.
+recommended polling time:   ( 211) minutes.
 Conveyance self-test routine
 recommended polling time:   (   2) minutes.
 SCT capabilities: (0x1085) SCT Status supported.
 
 SMART Attributes Data Structure revision number: 10
 Vendor Specific SMART Attributes with Thresholds:
 ID# ATTRIBUTE_NAME          FLAGS    VALUE WORST THRESH FAIL RAW_VALUE
-  1 Raw_Read_Error_Rate POSR--   119   099   006    -    208245592
-  3 Spin_Up_TimePO   097   096

Re: Raid 1

2021-01-25 Thread David Christensen

On 2021-01-24 21:23, mick crane wrote:

On 2021-01-24 20:10, David Christensen wrote:



Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.


I think I'll go with the first and last suggestion to just have 2 disks 
in raid1.
It seems that properly you'd want 2 disks in raid for the OS, 2 at least 
for the pool and maybe 1 for the cache.

Don't have anything big enough I could put 5 disks in.
I could probably get 3 disks in. Install the OS on one and then dd that 
to another and put that in a drawer and have another 2 disks as the zfs 
pool. I might have a fiddle about and see what goes on.



If you are short on hardware or money, one option is to install Debian 
onto a USB flash drive.   I ran desktop hardware as servers on USB flash 
drives for many years, and still keep a Debian 9 system on USB flash for 
maintenance purposes.  I have yet to wear one out.  If you feel the need 
for RAID, use two USB flash drives.



David



Re: Raid 1

2021-01-25 Thread Pankaj Jangid


Thanks Andy and Linux-Fan, for the detailed reply.



Re: Raid 1

2021-01-25 Thread Linux-Fan

Andy Smith writes:


Hi Pankaj,

Not wishing to put words in Linux-Fan's mouth, but my own views
are…

On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan  writes:
>
> > * OS data bitrot is not covered, but OS single HDD failure is.
> >   I achieve this by having OS and Swap on MDADM RAID 1
> >   i.e. mirrored but without ZFS.
>
> I am still learning.
>
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>and swap.

When people say, "I put OS and Swap on MDADM" they typically mean
the entire installed system before user/service data is put on it.
So that's / and all its usual sub-directories, and swap, possibly
with things later split off after install.


Yes, that is exactly how I meant it :)

My current setup has two disks each partitioned as follows:

* first   partition ESP  for /boot/efi (does not support RAID)
* second  partition MDADM RAID 1 for / (including /boot and /home)
* third   partition MDADM RAID 1 for swap
* fourth  partition ZFS mirror   for virtual machines and containers

Some may like to have /home separately. I personally prefer to store all my  
user-created data outside of the /home tree because many programs are using  
/home structures for cache and configuration files that are automatically  
generated and should (IMHO) not be mixed with what I consider important data.



> 2. Why did you put Swap on RAID? What is the advantage?

If you have swap used, and the device behind it goes away, your
system will likely crash.

The point of RAID is to increase availability. If you have the OS
itself in RAID and you have swap, the swap should be in RAID too.


That was exactly my reasoning, too. I can add that I did not use a ZFS  
volume for the swap mostly because of

https://github.com/openzfs/zfs/issues/7734
and I did not use it for the OS (/, /boot, /home) mainly because I wanted to  
avoid getting a non-booting system in case anything fails with the ZFS  
module DKMS build. The added benefit was a less complex installation  
procedure i.e. using Debian installer was possible and all ZFS stuff could  
be done from the installed and running system.


I would advise first-time RAID users against replicating my setup,  
because restoring after a failed disk will require invoking the respective  
restoration procedures of both technologies.



There are use cases where the software itself provides the
availability. For example, there is Ceph, which typically uses
simple block devices from multiple hosts and distributes the data
around.


Yes.

[...]


> How do you decide which partition to cover and which not?

For each of the storage devices in your system, ask yourself:

- Would your system still run if that device suddenly went away?

- Would your application(s) still run if that device suddenly went
  away?

- Could finding a replacement device and restoring your data from
  backups be done in a time span that you consider reasonable?

If the answer to those questions are not what you could tolerate,
add some redundancy in order to reduce unavailability. If you decide
you can tolerate the possible unavailability then so be it.


[...]

My rule of thumb: RAID 1 whenever possible i.e. on all actively relied-upon  
computers that are not laptops or other special form factors with tightly  
limited HDD/SSD options.


The replacement drive considerations are important for RAID setups, too. I  
used to have a "cold spare" HDD but given the rate at which the  
capacity/price ratio rises I thought it to be overly cautious/expensive to  
keep that scheme.


HTH
Linux-Fan

öö




Re: Raid 1

2021-01-25 Thread deloptes
mick crane wrote:

> I think I'll go with the first and last suggestion to just have 2 disks
> in raid1.
> It seems that properly you'd want 2 disks in raid for the OS, 2 at least
> for the pool and maybe 1 for the cache.
> Don't have anything big enough I could put 5 disks in.
> I could probably get 3 disks in. Install the OS on one and then dd that
> to another and put that in a drawer and have another 2 disks as the zfs
> pool. I might have a fiddle about and see what goes on.

Hi,
I have not followed this thread closely, but my advice is to keep it as simple
as possible.
Very often people here overcomplicate things - geeks and freaks - in the
good sense - but still, if you do not know ZFS or cannot afford the
infrastructure for it, just leave it.

In my usecase I came with following solution:

md0 - boot disk (ext3)
md1 - root disk (ext4)
md2 - swap
md3 - LVM for user data (encrypted + xfs)

I have this on two disks that were replaced and "grown" from 200GB to 1TB
over the past 18 years. Some of the Seagates I used in the beginning died and
RAID1 paid off.

Planning to move to GPT next; md0 will be converted to an EFI partition (FAT32) or I
will just create one additional partition on each disk for the EFI stuff.
I'm not sure if I need it at all, so I would have to be really bored to touch this.








Re: Raid 1

2021-01-24 Thread Andy Smith
Hi Pankaj,

Not wishing to put words in Linux-Fan's mouth, but my own views
are…

On Mon, Jan 25, 2021 at 11:04:09AM +0530, Pankaj Jangid wrote:
> Linux-Fan  writes:
> 
> > * OS data bitrot is not covered, but OS single HDD failure is.
> >   I achieve this by having OS and Swap on MDADM RAID 1
> >   i.e. mirrored but without ZFS.
> 
> I am still learning.
> 
> 1. By "by having OS and Swap on MDADM", did you mean the /boot partition
>and swap.

When people say, "I put OS and Swap on MDADM" they typically mean
the entire installed system before user/service data is put on it.
So that's / and all its usual sub-directories, and swap, possibly
with things later split off after install.

> 2. Why did you put Swap on RAID? What is the advantage?

If you have swap used, and the device behind it goes away, your
system will likely crash.

The point of RAID is to increase availability. If you have the OS
itself in RAID and you have swap, the swap should be in RAID too.

There are use cases where the software itself provides the
availability. For example, there is Ceph, which typically uses
simple block devices from multiple hosts and distributes the data
around.

A valid setup for Ceph is to have the OS in a small RAID just so
that a device failure doesn't take down a machine entirely, but then
have the data devices stand alone as Ceph itself will handle a
failure of those. Small boot+OS devices are cheap and it's so simple
to RAID them.

Normally Ceph is set up so that an entire host can be lost. If host
reinstallation is automatic and quick and there's so many hosts that
losing any one of them is a fairly minor occurrence then it could be
valid to not even put the OS+swap in RAID. Though for me it still
sounds like a lot more hassle than just replacing a dead drive in a
running machine, so I wouldn't do it personally.

>- I understood that RAID is used to detect disk failures early.

Not really. Although with RAID or ZFS or the like it is typical to
have a periodic (weekly, monthly, etc) scrub that reads all data and
may uncover drive problems like unreadable sectors, usually failures
happen when they will happen. The difference is that a copy of the
data still exists somewhere else, so that can be used and the
failure does not have to propagate to the application.

> How do you decide which partition to cover and which not?

For each of the storage devices in your system, ask yourself:

- Would your system still run if that device suddenly went away?

- Would your application(s) still run if that device suddenly went
  away?

- Could finding a replacement device and restoring your data from
  backups be done in a time span that you consider reasonable?

If the answer to those questions are not what you could tolerate,
add some redundancy in order to reduce unavailability. If you decide
you can tolerate the possible unavailability then so be it.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Du, 24 ian 21, 23:21:38, Linux-Fan wrote:
> mick crane writes:
> 
> > On 2021-01-24 17:37, Andrei POPESCU wrote:
> 
> [...]
> 
> > > If you want to combine Linux RAID and ZFS on just two drives you could
> > > partition the drives (e.g. two partitions on each drive), use the first
> > > partition on each drive for Linux RAID, install Debian (others will have
> > > to confirm whether the installer supports creating RAID from partitions)
> > > and then use the other partitions for the ZFS pool.
> 
> I can confirm that this works. In fact, I always thought that to be the
> "best practice" for MDADM: To use individual partitions rather than whole
> devices. OTOH for ZFS, best practice seems to be to use entire devices. I am
> not an expert on this, though :)

ZFS is actually using GPT partitions and also automatically creates a 
"reserve" 8 MiB partition, just in case a replacement disk is not 
exactly the same size as the other disk(s) in a VDEV.

So far I haven't found a way around it (not that I care, as I prefer to 
partition manually and identify physical devices by partition label)

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Raid 1

2021-01-24 Thread Pankaj Jangid
Linux-Fan  writes:

> * OS data bitrot is not covered, but OS single HDD failure is.
>   I achieve this by having OS and Swap on MDADM RAID 1
>   i.e. mirrored but without ZFS.

I am still learning.

1. By "by having OS and Swap on MDADM", did you mean the /boot partition
   and swap.

2. Why did you put Swap on RAID? What is the advantage?

   - I understood that RAID is used to detect disk failures early. How
 do you decide which partition to cover and which not?



Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-24 20:10, David Christensen wrote:

On 2021-01-24 03:36, mick crane wrote:


Let's say I have one PC and 2 unpartitioned disks.


Please tell us why you must put the OS and the backup images on the
same RAID mirror of two HDD's, and why you cannot add one (or two?)
more devices for the OS.


David


I think I'll go with the first and last suggestion to just have 2 disks 
in raid1.
It seems that properly you'd want 2 disks in raid for the OS, 2 at least 
for the pool and maybe 1 for the cache.

Don't have anything big enough I could put 5 disks in.
I could probably get 3 disks in. Install the OS on one and then dd that 
to another and put that in a drawer and have another 2 disks as the zfs 
pool. I might have a fiddle about and see what goes on.


mick
--
Key ID 4BFEBB31



Re: Raid 1

2021-01-24 Thread Linux-Fan

mick crane writes:


On 2021-01-24 17:37, Andrei POPESCU wrote:


[...]


If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confirm whether the installer supports creating RAID from partitions)
and then use the other partitions for the ZFS pool.


I can confirm that this works. In fact, I always thought that to be the  
"best practice" for MDADM: To use individual partitions rather than whole  
devices. OTOH for ZFS, best practice seems to be to use entire devices. I am  
not an expert on this, though :)



You might want to experiment with this in a VM first. For testing
purposes you can also experiment with ZFS on files instead of real
devices / partitions (probably with Linux RAID as well).

Kind regards,
Andrei


This is my problem "where is the OS to be running the ZFS to put Debian on ?"


You could use a live system, for instance. Beware that this route is  
complicated. I linked to the guide in a previous mail but am not sure if you  
were finally able to check it (you mentioned at least one of my links not  
being accessible, but not which one...).


All I want to do is back up PCs to another and have that have redundancy  
with 2 disks so if one gets borked I can still use the other and put things  
back together.

How do I do that ?


My recommendation would be to keep it simple stupid: Let the installer set up
RAID 1 MDADM for OS, swap and data and be done with it; avoid ZFS unless
there is some reason to need it :)


For sure MDADM lacks the bit rot protection, but it is easier to set up,
especially for the OS, and you can mitigate the bit rot (to some extent)
by running periodic backup integrity checks, which your software hopefully
supports.


HTH
Linux-Fan

öö

[...]




Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Du, 24 ian 21, 17:50:06, Andy Smith wrote:
> 
> Once it's up and running you can then go and create a second
> partition that spans the rest of each disk, and then when you are
> ready to create your zfs pool:
> 
> > "zpool create tank mirror disk1 disk2"
> 
> # zpool create tank mirror /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2 
> /dev/disk/by-id/ata-DISK2MODEL-SERIAL-part2
> 
> The DISK1MODEL-SERIAL bits will be different for you based on what
> the model and serial numbers are of your disks. Point is it's a pair
> of devices that are partition 2 of each disk.

At this point I'd recommend to use GPT partition labels instead (not to 
be confused with file system labels). Assuming labels datapart1 and 
datapart2 the create becomes:

# zpool create tank mirror /dev/disk/by-partlabel/datapart1 
/dev/disk/by-partlabel/datapart2

Now the output of 'zpool status' and all other commands will show the 
human-friendly labels instead of the device ID.


Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Raid 1

2021-01-24 Thread David Christensen

On 2021-01-24 03:36, mick crane wrote:


Let's say I have one PC and 2 unpartitioned disks.


Please tell us why you must put the OS and the backup images on the same 
RAID mirror of two HDD's, and why you cannot add one (or two?) more 
devices for the OS.



David



Re: Raid 1

2021-01-24 Thread Marc Auslander
Andy Smith  writes:
>...
>So personally I would just do the install of Debian with both disks
>inside the machine, manual partitioning, create a single partition
>big enough for your OS on the first disk and then another one the
>same on the second disk. Mark them as RAID members, set them to
>RAID-1, install on that.
>...

You don't say if this is or will become a secure boot system, which
would require an EFI partition.  Leaving a bit of space just in case
seems a good idea.



Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-24 17:37, Andrei POPESCU wrote:

On Du, 24 ian 21, 11:36:09, mick crane wrote:


I know I'm a bit thick about these things, what I'm blocked about is where
is the OS.
Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.


Ok


Install headers and ZFS-utils.
I put other disk in PC, PC boots from first disk.


Ok.


"zpool create tank mirror disk1 disk2"


This will destroy all data already existing on disk1 and disk2 (though I
strongly suspect zpool will simply refuse to use disk1). Same with Linux
RAID.

Creating the RAID (Linux or ZFS) will overwrite any data already
existing on the disks / partitions used for the RAID.

If you want to have the OS on RAID it's probably easiest to let the
installer configure that for you. This implies *both* disks are
available during install (unless the installer can create a "degraded"
RAID).

Installing Debian on ZFS involves manual steps anyway, so it's basically
create the pool with just one disk, install Debian and then 'attach' the
other disk to the first one.

If you want to combine Linux RAID and ZFS on just two drives you could
partition the drives (e.g. two partitions on each drive), use the first
partition on each drive for Linux RAID, install Debian (others will have
to confirm whether the installer supports creating RAID from partitions)
and then use the other partitions for the ZFS pool.

You might want to experiment with this in a VM first. For testing
purposes you can also experiment with ZFS on files instead of real
devices / partitions (probably with Linux RAID as well).

Kind regards,
Andrei


This is my problem "where is the OS to be running the ZFS to put Debian on ?"
All I want to do is back up PCs to another and have that have redundancy 
with 2 disks so if one gets borked I can still use the other and put 
things back together.

How do I do that ?
mick
--
Key ID 4BFEBB31



Re: Raid 1

2021-01-24 Thread Andy Smith
Hi Mick,

On Sun, Jan 24, 2021 at 11:36:09AM +, mick crane wrote:
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.

Wherever you installed it.

> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.

I think you are fundamentally going about this the wrong way.

There are several concerns and I think you are mixing them up. If I
understand you correctly, your concerns are:

1. Your data and OS should be backed up.
2. Your data and OS should be available even if a disk dies

Concern #1 is totally separate from concern #2 and is achieved by
setting up a backup system, has very little to do with whether you
use RAID or ZFS or whatever. It is worth a separate thread because
it's separate project.

For concern #2, that being *availability* of data and OS, there's
many ways to do it. You seem to have settled upon ZFS for your data,
and OS separately by some other means. That's fine.

A ZFS mirror vdev is going to need two identically-sized devices.
And you want to keep your OS separate. This suggests that each of
your disks should have two partitions. The first one would be for
the OS, and the second one would be for ZFS.

If you are going to keep your OS separate, I don't see any reason
not to use mdadm RAID-1 for the OS even if you're going to use zfs
for your data. Yes you could just install the OS onto a single
partition of a single disk, but you have two disks so why not use
RAID-1? If a disk breaks, your computer carries on working, what's
not to like?

So personally I would just do the install of Debian with both disks
inside the machine, manual partitioning, create a single partition
big enough for your OS on the first disk and then another one the
same on the second disk. Mark them as RAID members, set them to
RAID-1, install on that.

Once it's up and running you can then go and create a second
partition that spans the rest of each disk, and then when you are
ready to create your zfs pool:

> "zpool create tank mirror disk1 disk2"

# zpool create tank mirror /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2 
/dev/disk/by-id/ata-DISK2MODEL-SERIAL-part2

The DISK1MODEL-SERIAL bits will be different for you based on what
the model and serial numbers are of your disks. Point is it's a pair
of devices that are partition 2 of each disk.

> Can I then remove disk1 and PC will boot Debian from disk2 ?

This is only going to work if you have gone to the effort of
installing your OS on RAID. The easiest way to achieve that is to
have both disks in the machine when you install it and to properly
tell it that the first partition of each is a RAID member, create
them as a RAID-1 and tell the installer to install onto that.

As others mentioned, after it's installed you do have to manually
install the grub bootloader to the second device as well, as by
default it only gets installed on the first one.
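
A sketch of doing that on a BIOS-boot Debian system:

# dpkg-reconfigure grub-pc    # select both disks as install targets
or directly:
# grub-install /dev/sdb       # device name is an example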

A word of warning: RAID is quite a big topic for the uninitiated and
so is ZFS. You are proposing to take on both at once. You have some
learning to do. You may make mistakes, and this data seems precious
to you. I advise you to sort out the backups first. You might need
them sooner than you'd hoped.

Cheers,
Andy

-- 
https://bitfolk.com/ -- No-nonsense VPS hosting



Re: Raid 1

2021-01-24 Thread Andrei POPESCU
On Du, 24 ian 21, 11:36:09, mick crane wrote:
> 
> I know I'm a bit thick about these things, what I'm blocked about is where
> is the OS.
> Let's say I have one PC and 2 unpartitioned disks.
> Put one disk in PC and install Debian on it.

Ok

> Install headers and ZFS-utils.
> I put other disk in PC, PC boots from first disk.

Ok.

> "zpool create tank mirror disk1 disk2"

This will destroy all data already existing on disk1 and disk2 (though I 
strongly suspect zpool will simply refuse to use disk1). Same with Linux 
RAID.

Creating the RAID (Linux or ZFS) will overwrite any data already 
existing on the disks / partitions used for the RAID.

If you want to have the OS on RAID it's probably easiest to let the 
installer configure that for you. This implies *both* disks are 
available during install (unless the installer can create a "degraded" 
RAID).

Installing Debian on ZFS involves manual steps anyway, so it's basically 
create the pool with just one disk, install Debian and then 'attach' the 
other disk to the first one.
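
A sketch of that attach step, with hypothetical /dev/disk/by-id names and
partition 2 of each disk holding the pool:

# zpool create tank /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2
# zpool attach tank /dev/disk/by-id/ata-DISK1MODEL-SERIAL-part2 /dev/disk/by-id/ata-DISK2MODEL-SERIAL-part2
# zpool status tank    # shows the new mirror resilvering

zpool attach turns the single-disk vdev into a two-way mirror and copies
only the blocks that are actually in use.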

If you want to combine Linux RAID and ZFS on just two drives you could 
partition the drives (e.g. two partitions on each drive), use the first 
partition on each drive for Linux RAID, install Debian (others will have 
to confirm whether the installer supports creating RAID from partitions) 
and then use the other partitions for the ZFS pool.

You might want to experiment with this in a VM first. For testing 
purposes you can also experiment with ZFS on files instead of real 
devices / partitions (probably with Linux RAID as well).
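
A sketch of the file-backed experiment (paths and sizes are arbitrary):

# truncate -s 1G /var/tmp/zd1 /var/tmp/zd2
# zpool create testpool mirror /var/tmp/zd1 /var/tmp/zd2
# zpool status testpool
# zpool destroy testpool    # clean up afterwards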

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser




Re: Raid 1

2021-01-24 Thread mick crane

On 2021-01-23 22:01, David Christensen wrote:

On 2021-01-23 07:01, mick crane wrote:

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Vi, 22 ian 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device 
(I

believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let 
md

synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually 
needed

or not.

If you are really strapped for space and must do this ZFS can do it 
much

more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over 
Linux

RAID).

Unfortunately ZFS is slightly more complicated from the packaging 
side,

and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the 
dkms

module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like 
for like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and 
the other PCs to automate the backing up.

Is that backup Ninja or something ?



RAID protects against storage device sectors going bad and against
entire storage devices going bad -- e.g. hard disk drives, solid state
drives, etc..


Backups protect against filesystem contents going bad -- e.g. files,
directories, metadata, etc..


While putting an operating system and backups within a single RAID can
be done, this will complicate creation of a ZFS pool and will
complicate disaster preparedness/ recovery procedures.  The following
instructions assume your OS is on one device and that you will
dedicate two HDD's to ZFS.


See "Creating a Mirrored Storage Pool":

https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html


The above URL is good for concepts, but the virtual device names
('c1d0', 'c2d0') are for Solaris.  For Debian, you will want to
zero-fill both HDD's with dd(1) and then create the pool with zpool(8)
using device identity nodes:

/dev/disk/by-id/ata-...


Be extremely careful that you specify the correct devices!


ZFS will mark the drives and create a ZFS pool named 'tank' mounted at
'/tank'.  Note the parallel namespaces -- 'tank' is ZFS namespace and
has no leading slash, while '/tank' is a Unix absolute path.


'/tank' is a ZFS filesystem that can do everything a normal Unix
directory can do.  So, you could create a directory for backups and
create directories for specific machines:

# mkdir /tank/backup

# mkdir /tank/backup/pc1

# mkdir /tank/backup/pc2


Or, you could create a ZFS filesystem for backups and create ZFS
filesystems for specific machines:

# zfs create tank/backup

# zfs create tank/backup/pc1

# zfs create tank/backup/pc2


Both will give you directories that you can put your backups into
using whatever tools you choose, but the latter will give you
additional ZFS capabilities.


David


I know I'm a bit thick about these things, what I'm blocked about is 
where is the OS.

Let's say I have one PC and 2 unpartitioned disks.
Put one disk in PC and install Debian on it.
Install headers and ZFS-utils.
I put other disk in PC, PC boots from first disk.

"zpool create tank mirror disk1 disk2"
Can I then remove disk1 and PC will boot Debian from disk2 ?

mick
--
Key ID

Re: Raid 1

2021-01-23 Thread David Christensen

On 2021-01-23 07:01, mick crane wrote:

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Vi, 22 ian 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's
scattered about is on the running disks and this new/old one is just 
backup

for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually 
before

invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device (I
believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let md
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually needed
or not.

If you are really strapped for space and must do this ZFS can do it much
more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over Linux
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side,
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the dkms
module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for 
like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and the 
other PCs to automate the backing up.

Is that backup Ninja or something ?



RAID protects against storage device sectors going bad and against 
entire storage devices going bad -- e.g. hard disk drives, solid state 
drives, etc..



Backups protect against filesystem contents going bad -- e.g. files, 
directories, metadata, etc..



While putting an operating system and backups within a single RAID can 
be done, this will complicate creation of a ZFS pool and will complicate 
disaster preparedness/ recovery procedures.  The following instructions 
assume your OS is on one device and that you will dedicate two HDD's to ZFS.



See "Creating a Mirrored Storage Pool":

https://docs.oracle.com/cd/E19253-01/819-5461/gaynr/index.html


The above URL is good for concepts, but the virtual device names 
('c1d0', 'c2d0') are for Solaris.  For Debian, you will want to 
zero-fill both HDD's with dd(1) and then create the pool with zpool(8) 
using device identity nodes:


/dev/disk/by-id/ata-...


Be extremely careful that you specify the correct devices!
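
A minimal sketch of those two steps (the ata-... names below are 
placeholders -- substitute the identifiers of your own drives and 
triple-check them, because dd will wipe whatever you point it at):

# dd if=/dev/zero of=/dev/disk/by-id/ata-DISK1 bs=1M status=progress

# dd if=/dev/zero of=/dev/disk/by-id/ata-DISK2 bs=1M status=progress

# zpool create tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2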


ZFS will mark the drives and create a ZFS pool named 'tank' mounted at 
'/tank'.  Note the parallel namespaces -- 'tank' is ZFS namespace and has 
no leading slash, while '/tank' is a Unix absolute path.



'/tank' is a ZFS filesystem that can do everything a normal Unix 
directory can do.  So, you could create a directory for backups and 
create directories for specific machines:


# mkdir /tank/backup

# mkdir /tank/backup/pc1

# mkdir /tank/backup/pc2


Or, you could create a ZFS filesystem for backups and create ZFS 
filesystems for specific machines:


# zfs create tank/backup

# zfs create tank/backup/pc1

# zfs create tank/backup/pc2


Both will give you directories that you can put your backups into using 
whatever tools you choose, but the latter will give you additional ZFS 
capabilities.



David



Re: Raid 1

2021-01-23 Thread Linux-Fan

mick crane writes:


On 2021-01-23 17:11, Linux-Fan wrote:

mick crane writes:


[...]


Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html


For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit from the bit rot protection for
the actually important data while maintaining basic redundancy for the
OS installation. YMMV.

Here are my notes on essential ZFS commands (in case they might be of help):
https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml


[...]


link is not currently available.
what you seem to be doing there is backing up the data with ZFS but not  
backing up the OS, so I guess your raid is the backup for the OS ?

mick


Both open fine here, which of the links fails for you?

RAID is not Backup! Hence I have entirely separate programs for backup. The  
RAID is only the "quickest" layer -- solely responsible for catching  
problems with randomly failing HDDs and -- for non-OS-data -- bit rot.


My system works as follows

* OS data bitrot is not covered, but OS single HDD failure is.
  I achieve this by having OS and Swap on MDADM RAID 1
  i.e. mirrored but without ZFS.

* Actual data bitrot is covered, as is single HDD failure by
  means of ZFS mirrors for all data.

* Backups are separate. For instance, important data is copied to a
  separate computer upon shutdown. Less important data is part of
  manually-invoked backup tasks which use multiple programs to cope
  with different types of data...

HTH
Linux-Fan

öö


pgpsazBj3_HL2.pgp
Description: PGP signature


Re: Raid 1

2021-01-23 Thread mick crane

On 2021-01-23 17:11, Linux-Fan wrote:

mick crane writes:


On 2021-01-23 12:20, Andrei POPESCU wrote:

On Vi, 22 ian 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's
scattered about is on the running disks and this new/old one is just 
backup

for them.
Can I assume that Debian installer in some expert mode will sort out 
the
raid or do I need to install to one disk and then mirror it manually 
before

invoking the raid thing ?


[...]


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like 
for like partitions?


If I get your scenario correctly you want to install Debian (without
ZFS i.e. not "root on ZFS") and then create a ZFS mirror?

If yes, then as a preparation you need either (a) two entire devices
of ~ same size to use with ZFS or (b) two partitions to use with ZFS.

Say you install as follows:

* sda1: OS
* sda2: Swap
* sda : XX GiB free

* sdb: XX+ GiB free

Then prepare two unformatted partitions:

* sda3: XX GiB "for ZFS"
* sdb1: XX GiB "for ZFS"

and use these devices for ZFS.

If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"


You can specify a mountpoint and it will be created automatically. No
need to pre-create the directory as with other file systems.


I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


Yes. In case you are unsure check the output of `zpool status` to see
the structure as understood by ZFS.

If that's how it works I'll just need something on the backup_pc and 
the other PCs to automate the backing up.

Is that backup Ninja or something ?


I have never used backup Ninja. Depending on your use case anything
from simple rsync to borgbackup may serve :)

Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS
mirrors for the actual data. This way, I can use the Debian Installer
for installation purposes and benefit from the bit rot protection for
the actually important data while maintaining basic redundancy for the
OS installation. YMMV.

Here are my notes on essential ZFS commands (in case they might be of 
help):

https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml

HTH
Linux-Fan

öö


link is not currently available.
what you seem to be doing there is backing up the data with ZFS but not 
backing up the OS, so I guess your raid is the backup for the OS ?

mick


--
Key ID 4BFEBB31



Re: Raid 1

2021-01-23 Thread Linux-Fan

mick crane writes:


On 2021-01-23 12:20, Andrei POPESCU wrote:

On Vi, 22 ian 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's
scattered about is on the running disks and this new/old one is just backup
for them.
Can I assume that Debian installer in some expert mode will sort out the
raid or do I need to install to one disk and then mirror it manually before
invoking the raid thing ?


[...]


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I think  
I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one disk.
install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) does  
zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for like  
partitions?


If I get your scenario correctly you want to install Debian (without ZFS  
i.e. not "root on ZFS") and then create a ZFS mirror?


If yes, then as a preparation you need either (a) two entire devices of  
~ same size to use with ZFS or (b) two partitions to use with ZFS.


Say you install as follows:

* sda1: OS
* sda2: Swap
* sda : XX GiB free

* sdb: XX+ GiB free

Then prepare two unformatted partitions:

* sda3: XX GiB "for ZFS"
* sdb1: XX GiB "for ZFS"

and use these devices for ZFS.
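
(For instance -- a sketch only; prefer the stable /dev/disk/by-id/... 
names of those partitions over sda3/sdb1, which can change between 
boots:

zpool create backup mirror /dev/disk/by-id/ata-DISK1-part3 /dev/disk/by-id/ata-DISK2-part1

where "backup" is the pool name used later in this thread.)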

If that's done and I've made a zpool called  "backup" from then on the ZFS  
is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"


You can specify a mountpoint and it will be created automatically. No need  
to pre-create the directory as with other file systems.



I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to "my_backup_pc/backup/my_pc1"  
and ZFS mirrors the data to other disk in pool ?


Yes. In case you are unsure check the output of `zpool status` to see the  
structure as understood by ZFS.


If that's how it works I'll just need something on the backup_pc and the  
other PCs to automate the backing up.

Is that backup Ninja or something ?


I have never used backup Ninja. Depending on your use case anything from  
simple rsync to borgbackup may serve :)
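
(At the simple end, something along these lines run from each client -- 
hostnames and paths are only examples:

rsync -aH --delete /home/ backup_pc:/backup/my_pc1/home/

borgbackup adds deduplication, compression and encryption on top, at the 
cost of a bit more setup.)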


Please note that "root on ZFS" is possible but quite complicated:
https://openzfs.github.io/openzfs-docs/Getting%20Started/Debian/Debian%20Buster%20Root%20on%20ZFS.html

For my current system I actually used mdadm RAID 1 for OS+Swap and ZFS  
mirrors for the actual data. This way, I can use the Debian Installer for  
installation purposes and benefit from the bit rot protection for the  
actually important data while maintaining basic redundancy for the OS  
installation. YMMV.


Here are my notes on essential ZFS commands (in case they might be of help):
https://masysma.lima-city.de/37/zfs_commands_shortref.xhtml

HTH
Linux-Fan

öö


pgpyxBOnc9X6C.pgp
Description: PGP signature


Re: Raid 1

2021-01-23 Thread mick crane

On 2021-01-23 12:20, Andrei POPESCU wrote:

On Vi, 22 ian 21, 22:26:46, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's
scattered about is on the running disks and this new/old one is just 
backup

for them.
Can I assume that Debian installer in some expert mode will sort out 
the
raid or do I need to install to one disk and then mirror it manually 
before

invoking the raid thing ?


The "raid thing" is a separate layer below the partitions and file
systems.

Technically it is possible to create the mirror with just one device (I
believe mdadm calls this "degraded"), partition the md mirror device,
install, copy data to it, etc., add the second device later and let md
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data
"above" it has to copy every single bit to the other drive as well
(similar to a dd device-to-device copy), regardless if actually needed
or not.

If you are really strapped for space and must do this ZFS can do it 
much

more efficiently, because it controls the entire "stack" and knows
exactly which blocks to copy (besides many other advantages over Linux
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side,
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives,
especially on a stable system, where there is less hassle with the dkms
module and it's amazingly simple to use once you familiarise yourself
with the basics.

Kind regards,
Andrei


Sigh, OK I take advice and have a go.
Really I just want to get on and do some drawings or something but I 
think I'll thank myself later if I get proper backup in place.

If after having a quick look am I understanding anything?
Partition and install minimal Debian with no X or anything on just one 
disk.

install headers and zfs-utils.
Add other disk and then what ? To make it a mirror pool (like raid1) 
does zfs take care of the partitions.
Do I want to delete all partitions on other disk first or make like for 
like partitions?


If that's done and I've made a zpool called  "backup" from then on the 
ZFS is nothing to do with the kernel ?

I ask kernel make a directory "my_pc1"
then
"zfs create -o mountpoint=/my_pc1 backup/my_pc1"

I ask kernel make a directory "my_pc2"
then
"zfs create -o mountpoint=/my_pc2 backup/my_pc2"

So then I can copy files from other PC (pc1) to 
"my_backup_pc/backup/my_pc1" and ZFS mirrors the data to other disk in 
pool ?


If that's how it works I'll just need something on the backup_pc and the 
other PCs to automate the backing up.

Is that backup Ninja or something ?

mick


--
Key ID 4BFEBB31



Re: Raid 1

2021-01-23 Thread Andrei POPESCU
On Vi, 22 ian 21, 22:26:46, mick crane wrote:
> hello,
> I want to tidy things up as suggested.
> Have one old PC that I'll put 2 disks in and tidy everything up so what's
> scattered about is on the running disks and this new/old one is just backup
> for them.
> Can I assume that Debian installer in some expert mode will sort out the
> raid or do I need to install to one disk and then mirror it manually before
> invoking the raid thing ?

The "raid thing" is a separate layer below the partitions and file 
systems.

Technically it is possible to create the mirror with just one device (I 
believe mdadm calls this "degraded"), partition the md mirror device, 
install, copy data to it, etc., add the second device later and let md 
synchronize the two drives.

Because Linux RAID is a separate layer with no knowledge of the data 
"above" it has to copy every single bit to the other drive as well 
(similar to a dd device-to-device copy), regardless if actually needed 
or not.

If you are really strapped for space and must do this ZFS can do it much 
more efficiently, because it controls the entire "stack" and knows 
exactly which blocks to copy (besides many other advantages over Linux 
RAID).

Unfortunately ZFS is slightly more complicated from the packaging side, 
and installing Debian on a ZFS root is difficult.

It still makes an excellent choice to manage your storage drives, 
especially on a stable system, where there is less hassle with the dkms 
module and it's amazingly simple to use once you familiarise yourself 
with the basics.

Kind regards,
Andrei
-- 
http://wiki.debian.org/FAQsFromDebianUser


signature.asc
Description: PGP signature


Re: Raid 1

2021-01-22 Thread David Christensen

On 2021-01-22 15:10, David Christensen wrote:


A key issue with storage is bit rot.


I should have said "bit rot protection".


David



Re: Raid 1

2021-01-22 Thread David Christensen

On 2021-01-22 14:26, mick crane wrote:

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's scattered about is on the running disks and this new/old one is 
just backup for them.
Can I assume that Debian installer in some expert mode will sort out the 
raid or do I need to install to one disk and then mirror it manually 
before invoking the raid thing ?



I would install a small SSD and do a fresh install of the OS onto that.


I would then install the two HDD's and set up a mirror (RAID 1).  Linux 
options include Multiple Device md(4), Linux Volume Manager lvm(8), and 
ZFS zfs(8).



A key issue with storage is bit rot.  btrfs and ZFS have it. 
dm-integrity (man page?) can provide it for Linux solutions without. 
btrfs requires maintenance.  I did not do it, and my disks suffered. 
ZFS does not require maintenance and has many killer features.  I have 
not tried dm-integrity, but would be interested in reading a HOWTO for 
Debian.



Due to CDDL and GPL licensing conflicts, ZFS is not fully integrated 
into Debian.  ZFS can be installed and used on Debian, but ZFS-on-root 
is not supported by the Debian installer.



The CDDL and BSD licenses are compatible.  So, ZFS is fully integrated 
on FreeBSD, and the FreeBSD installer can do ZFS-on-root.  FreeBSD has 
other features I like.  I use FreeBSD and ZFS on my servers; including 
storage (Samba).



David



Re: Raid 1

2021-01-22 Thread Linux-Fan

mick crane writes:


hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so what's  
scattered about is on the running disks and this new/old one is just backup  
for them.
Can I assume that Debian installer in some expert mode will sort out the  
raid or do I need to install to one disk and then mirror it manually before  
invoking the raid thing ?


Debian Installer can create MDADM RAID volumes even in non-expert mode.

You need to explicitly select the right options in the installer  
partitioning screen i.e. create the partitions, then create MDADM RAID 1  
devices on top of them and finally let them be formatted with  
ext4/filesystem of choice and be the installation target.


AFAIK "Guided" installation modes do not automatically create RAID, i.e. I  
recommend using the manual partitioning mode.


In the few RAID installs I did, it worked out all of the time.

Only thing to do afterwards is to ensure that GRUB is installed on both of  
the respective devices (dpkg-reconfigure grub-pc assuming BIOS mode).


HTH
Linux-Fan

öö


pgpJCMm17oPdN.pgp
Description: PGP signature


Raid 1

2021-01-22 Thread mick crane

hello,
I want to tidy things up as suggested.
Have one old PC that I'll put 2 disks in and tidy everything up so 
what's scattered about is on the running disks and this new/old one is 
just backup for them.
Can I assume that Debian installer in some expert mode will sort out the 
raid or do I need to install to one disk and then mirror it manually 
before invoking the raid thing ?


mick

--
Key ID 4BFEBB31



Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer




On 10/26/2020 7:55 AM, Bill wrote:

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data,  with 400GB+ 
reserved for future uses on each disk.


	Oh, also, why are you leaving so much unused space on the drives?  One 
of the big advantages of RAID and LVM is the ability to manage storage 
space.  Unmanaged space on drives doesn't serve much purpose.




Re: Raid 1 borked

2020-10-26 Thread Leslie Rhorer

This might be better handled on linux-r...@vger.kernel.org

On 10/26/2020 10:35 AM, Dan Ritter wrote:

Bill wrote:

So we're setting up a small server with a pair of 1 TB hard disks sectioned
into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
future uses on each disk.


That's weird, but I expect you have a reason for it.


	It does seem odd.  I am curious what the reasons might be.  Do you mean 
perhaps, rather than RAID 1 pairs on each disk, each partition  is 
paired with the corresponding partition on the other drive?


Also, why so small and so many?


I'm not sure what happened, we had the five pairs of disk partitions set up
properly through the installer without problems. However, now the Raid 1
pairs are not mounted as separate partitions but do show up as
subdirectories under /, ie /datab, and they do seem to work as part of the
regular / filesystem.  df -h does not show any md devices or sda/b devices,
neither does mount. (The system partitions are on an nvme ssd).


Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.



lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
reveals that sda[1-5] and sdb[1-5] are still listed as
TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What
commands should I use for that? And secondly, I'd like to get the raid
arrays remounted as separate partitions. How to do that?


Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.


	Perhaps before that (or after), what are the contents of 
/etc/mdadm/mdadm.conf?  Try:


grep -v "#" /etc/mdadm/mdadm.conf


Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0/dataa  ext4defaults0   0

and try again.

-dsr-




Fortunately, there is no data to worry about. However, I'd rather not
reinstall as we've put in a bit of work installing and configuring things.
I'd prefer not to lose that. Can someone help us out?


	Don't fret.  There is rarely, if ever, any need to re-install a system 
to accommodate updates in RAID facilities.  Even if / or /boot are RAID 
arrays - which does not seem to be the case here - one can ordinarily 
manage RAID systems without resorting to a re-install.  I cannot think 
of any reason why a re-install would be required in order to manage a 
mounted file system.  Even if /home is part of a mounted file system 
(other than /, of course), the root user can handle any sort of changes 
to mounted file systems.  This would be especially true in your case, 
where your systems aren't even mounted, yet.  Even in the worst case - 
and yours is far from that - one should ordinarily be able to boot from 
a DVD or a USB drive and manage the system.




Re: Raid 1 borked

2020-10-26 Thread Mark Neyhart
On 10/26/20 4:55 AM, Bill wrote:

> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5].
> blkid reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?
> 
    Bill

mdadm will give you some information about which partitions have been
configured as part of a raid device.

mdadm --examine /dev/sda1

It can also report on a raid device

mdadm --detail /dev/md1

If these commands don't report anything, you will need to define the
raid devices again.

Mark



Re: Raid 1 borked

2020-10-26 Thread R. Ramesh

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data, with 400GB+ 
reserved for future uses on each disk. I'm not sure what happened, we 
had the five pairs of disk partitions set up properly through the 
installer without problems. However, now the Raid 1 pairs are not 
mounted as separate partitions but do show up as subdirectories under 
/, ie /datab, and they do seem to work as part of the regular / 
filesystem. df -h does not show any md devices or sda/b devices, 
neither does mount. (The system partitions are on an nvme ssd). lsblk 
reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid 
reveals that sda[1-5] and sdb[1-5] are still listed as

TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What 
commands should I use for that? And secondly, I'd like to get the raid 
arrays remounted as separate partitions. How to do that? Fortunately, 
there is no data to worry about. However, I'd rather not reinstall as 
we've put in a bit of work installing and configuring things. I'd 
prefer not to lose that. Can someone help us out?

Thanks in advance,

Bill


Did you create the md raid1s after partitioning the disks?

Normally when you install mdadm or when you install the system from 
usb/.iso for the first time, the respective mds are assembled and 
appropriately set up if you have already created them.


If you added and partitioned the disk after the main system has been 
installed and running, you will have to create md raid1s and enable 
automatic assembly through /etc/mdadm.conf file. You may need to update 
your initrd also, but this I am not sure. To access and use the md 
raid1s as file systems, You also need to add appropriate fstab entries 
to mount them.
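
For one of the five pairs that would look roughly like this (device 
names, filesystem and mount point are examples, and each array needs a 
filesystem before its fstab line will mount):

mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u

and in /etc/fstab:

/dev/md0  /dataa  ext4  defaults  0  2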


Hope I am not trivializing your issues.

Regards
Ramesh



Re: Raid 1 borked

2020-10-26 Thread Dan Ritter
Bill wrote: 
> So we're setting up a small server with a pair of 1 TB hard disks sectioned
> into 5x100GB Raid 1 partition pairs for data,  with 400GB+ reserved for
> future uses on each disk.

That's weird, but I expect you have a reason for it.

> I'm not sure what happened, we had the five pairs of disk partitions set up
> properly through the installer without problems. However, now the Raid 1
> pairs are not mounted as separate partitions but do show up as
> subdirectories under /, ie /datab, and they do seem to work as part of the
> regular / filesystem.  df -h does not show any md devices or sda/b devices,
> neither does mount. (The system partitions are on an nvme ssd).

Mounts have to happen at mount points, and mount points are
directories. What you have is five mount points and nothing
mounted on them.


> lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. blkid
> reveals that sda[1-5] and sdb[1-5] are still listed as
> TYPE="linux_raid_member".
> 
> So first of all I'd like to be able to diagnose what's going on. What
> commands should I use for that? And secondly, I'd like to get the raid
> arrays remounted as separate partitions. How to do that?

Well, you need to get them assembled and mounted. I'm assuming
you used mdadm.

Start by inspecting /proc/mdstat. Does it show 5 assembled MD
devices? If not:

mdadm -A /dev/md0
mdadm -A /dev/md1
mdadm -A /dev/md2
mdadm -A /dev/md3
mdadm -A /dev/md4

And tell us any errors.

Once they are assembled, mount them:

mount -a

if that doesn't work -- did you remember to list them in
/etc/fstab? Put them in there, something like:

/dev/md0/dataa  ext4defaults0   0

and try again.

-dsr-


> 
> Fortunately, there is no data to worry about. However, I'd rather not
> reinstall as we've put in a bit of work installing and configuring things.
> I'd prefer not to lose that. Can someone help us out?
> 
> Thanks in advance,
> 
>   Bill
> -- 
> Sent using Icedove on Debian GNU/Linux.
> 

-- 
https://randomstring.org/~dsr/eula.html is hereby incorporated by reference.
there is no justice, there is just us.



Raid 1 borked

2020-10-26 Thread Bill

Hi folks,

So we're setting up a small server with a pair of 1 TB hard disks 
sectioned into 5x100GB Raid 1 partition pairs for data,  with 400GB+ 
reserved for future uses on each disk.


I'm not sure what happened, we had the five pairs of disk partitions set 
up properly through the installer without problems. However, now the 
Raid 1 pairs are not mounted as separate partitions but do show up as 
subdirectories under /, ie /datab, and they do seem to work as part of 
the regular / filesystem.  df -h does not show any md devices or sda/b 
devices, neither does mount. (The system partitions are on an nvme ssd).


lsblk reveals sda and sdb with sda[1-5] and sdb[1-5] but no md[0-5]. 
blkid reveals that sda[1-5] and sdb[1-5] are still listed as

TYPE="linux_raid_member".

So first of all I'd like to be able to diagnose what's going on. What 
commands should I use for that? And secondly, I'd like to get the raid 
arrays remounted as separate partitions. How to do that?


Fortunately, there is no data to worry about. However, I'd rather not 
reinstall as we've put in a bit of work installing and configuring 
things. I'd prefer not to lose that. Can someone help us out?


Thanks in advance,

Bill
--
Sent using Icedove on Debian GNU/Linux.



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 09:25, Erwan RIGOLLOT wrote:


A RAID 1 across 3 disks means that all 3 disks hold the data, so you can 
lose up to 2 disks without losing data, but in that case all 3 disks run 
and wear continuously, while you have no rebuild time if you lose a disk.


Yes there will be: there is a rebuild time anyway once the failed disk 
is replaced.


PS: odd that this message arrived several hours after it was sent.




RE: Raid 1

2019-09-25 Thread Erwan RIGOLLOT
Hello,

If you choose to configure the disk as a spare, it will only start working 
when another one fails.
That is, it will not wear during normal operation, but it will mean a RAID 
rebuild time (the data will have to be copied onto it).

A RAID 1 across 3 disks means that all 3 disks hold the data, so you can 
lose up to 2 disks without losing data, but in that case all 3 disks run 
and wear continuously, while you have no rebuild time if you lose a disk.

It's a trade-off to make ...
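
With mdadm the two layouts look roughly like this (device names are 
examples):

mdadm --create /dev/md0 --level=1 --raid-devices=2 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1

mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1

The first keeps the third disk idle until a failure; the second keeps 
three live copies of the data.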

Have a good day!

Erwan
-Original message-
From: steve  
Sent: Wednesday 25 September 2019 09:07
To: duf 
Subject: Raid 1

Hello,

I have three disks that I would like to set up as RAID 1.

I can either create an array of two disks plus a spare, or create an 
array of three disks with no spare.

Which is better?

Thanks

Steve



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 12:20, Pascal Hambourg wrote:

On 25/09/2019 at 11:39, steve wrote:


The argument that the spare does not work, and therefore does not wear,
is a good one, for example.


Yes. Or at least it wears less than if it were working, but more than if 
it sat on a shelf. It stays powered on and exposed to the heat of the 
machine.


Another argument is rebuild time. With very high-capacity disks (capacity 
grows faster than throughput), a rebuild takes longer and longer - several 
hours - and the risk that the only remaining active disk, which has 
suffered the same wear, fails in turn before the rebuild finishes is not 
negligible, all the more so as it is under heavier load during the 
rebuild than in normal operation.


And if performance matters, RAID 1 can load-balance reads across all the 
active disks.




Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 11:39, steve wrote:

On 25-09-2019, at 10:12:49 +0200, Pascal Hambourg wrote:


On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up as RAID 1.

I can either create an array of two disks plus a spare, or create an
array of three disks with no spare.

Which is better?


Better for what?


For me.


I was asking about the optimisation criterion chosen.


The argument that the spare does not work, and therefore does not wear,
is a good one, for example.


Yes. Or at least it wears less than if it were working, but more than if 
it sat on a shelf. It stays powered on and exposed to the heat of the 
machine.


Another argument is rebuild time. With very high-capacity disks (capacity 
grows faster than throughput), a rebuild takes longer and longer - several 
hours - and the risk that the only remaining active disk, which has 
suffered the same wear, fails in turn before the rebuild finishes is not 
negligible, all the more so as it is under heavier load during the 
rebuild than in normal operation.




Re: Raid 1

2019-09-25 Thread steve

On 25-09-2019, at 11:21:42 +0200, Jean-Michel OLTRA wrote:


On Wednesday 25 September 2019, steve wrote...

I can either create an array of two disks plus a spare or

I have been running it that way for years. If a disk, or part of a disk,
shows weaknesses, the spare takes over. That leaves time to buy another
disk to rebuild the set.


Me too. However, a while ago I had a problem with my system and I
thought the cause was one of the disks in the array.
So I removed it from the array, but the problem persisted. I
finally found the problem, which had nothing to do with that disk
at all. Out of laziness I let the summer go by, and I have just put
that disk back into the array without remembering that it was a spare
originally.

So I now have an array with 3 disks and no spare. Hence my
question.

Like you, I think that the 2 disks + spare solution is a good
option. Before removing one of the disks from the array and putting it
back as a spare, I wanted to get the list's opinion.



Re: Raid 1

2019-09-25 Thread Jean-Michel OLTRA


Hello,


On Wednesday 25 September 2019, steve wrote...


> I can either create an array of two disks plus a spare or

I have been running it that way for years. If a disk, or part of a disk,
shows weaknesses, the spare takes over. That leaves time to buy another
disk to rebuild the set.


-- 
jm



Re: Raid 1

2019-09-25 Thread steve

On 25-09-2019, at 10:12:49 +0200, Pascal Hambourg wrote:


On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up as RAID 1.

I can either create an array of two disks plus a spare, or create an
array of three disks with no spare.

Which is better?


Better for what?


For me.


"Best" in absolute terms does not exist.


Agreed, my question was an open one.

The argument that the spare does not work, and therefore does not wear,
is a good one, for example.





Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 10:14, kaliderus wrote:



If you want redundancy (hence RAID 1),


No. You can also get redundancy with RAID 4, 5, 6 or 10.


you need a parity disk.


No, not necessarily. RAID 1 and 10 have no parity. Only RAID 4 has 
a dedicated parity disk. RAID 5 and 6 have parity distributed across 
all the active disks.



"an array of three disks with no spare" -- I am not really sure what
that is; presumably RAID 0


You are confirming that you do not know what you are talking about and 
are spouting nonsense.




RE: Raid 1

2019-09-25 Thread Erwan RIGOLLOT
Er, I do not agree with you.
A spare disk is an inactive disk.
You can do a RAID 1 across 3 disks with all three active and holding 
the data, and therefore no spare.

-Original message-
From: kaliderus  
Sent: Wednesday 25 September 2019 10:14
To: duf 
Subject: Re: Raid 1

On Wed, 25 Sept 2019 at 09:07, steve  wrote:
>
> Hello,
>
> I have three disks that I would like to set up as RAID 1.
>
> I can either create an array of two disks plus a spare, or 
> create an array of three disks with no spare.
If you want redundancy (hence RAID 1), you need a parity disk.

>
> Which is better?
Read the relevant documentation so as to understand the different 
architectures :-) "an array of three disks with no spare" -- I am not 
really sure what that is; presumably RAID 0, which is "redundant" in name 
only and in practice just aggregates 3 disks into a single logical unit, 
and if you lose one disk you lose everything, the antithesis of the notion 
of redundancy.

Have fun.



Re: Raid 1

2019-09-25 Thread Eric Degenetais
hello

On Wed, 25 Sept 2019 at 10:14, kaliderus  wrote:
>
> On Wed, 25 Sept 2019 at 09:07, steve  wrote:
> >
> > Hello,
> >
> > I have three disks that I would like to set up as RAID 1.
> >
> > I can either create an array of two disks plus a spare, or
> > create an array of three disks with no spare.
> If you want redundancy (hence RAID 1), you need a parity disk.
>
> >
> > Which is better?
> Read the relevant documentation so as to understand the different
> architectures :-)
> "an array of three disks with no spare" -- I am not really sure what
> that is; presumably RAID 0, which is "redundant" in name only, and
No: it can be RAID 1 (mirroring), which emphasizes redundancy
(3 copies) at the expense of capacity (3 disks to store a volume of
data equal to the smallest of them, or to a single one of them if they
are identical).
> which in practice just aggregates 3 disks into a single logical
> unit, and if you lose one disk you lose everything, the antithesis of
So no, in that case you can lose up to two of them (leaving aside
corruption risks, for which at least three disks need to be compared
with each other).
> the notion of redundancy.
>
> Have fun.
>

Regards
__
Éric Dégenètais
Henix

http://www.henix.com
http://www.squashtest.org



Re: Raid 1

2019-09-25 Thread kaliderus
On Wed, 25 Sept 2019 at 09:07, steve  wrote:
>
> Hello,
>
> I have three disks that I would like to set up as RAID 1.
>
> I can either create an array of two disks plus a spare, or
> create an array of three disks with no spare.
If you want redundancy (hence RAID 1), you need a parity disk.

>
> Which is better?
Read the relevant documentation so as to understand the different
architectures :-)
"an array of three disks with no spare" -- I am not really sure what
that is; presumably RAID 0, which is "redundant" in name only, and
which in practice just aggregates 3 disks into a single logical
unit, and if you lose one disk you lose everything, the antithesis of
the notion of redundancy.

Have fun.



Re: Raid 1

2019-09-25 Thread Pascal Hambourg

On 25/09/2019 at 09:07, steve wrote:

Hello,

I have three disks that I would like to set up as RAID 1.

I can either create an array of two disks plus a spare, or create an
array of three disks with no spare.

Which is better?


Better for what?
"Best" in absolute terms does not exist.



Raid 1

2019-09-25 Thread steve

Hello,

I have three disks that I would like to set up as RAID 1.

I can either create an array of two disks plus a spare, or create an
array of three disks with no spare.

Which is better?

Thanks

Steve



Re: Install stretch in existing Raid 1 partition

2017-12-01 Thread deloptes
Marc Auslander wrote:

> The installer manual is silent about installing in an existing raid
> partition.  I could follow my nose but wondered if there is any advice
> you can provide.

Does this help https://wiki.debian.org/DebianInstaller/SoftwareRaidRoot

regards



Install stretch in existing Raid 1 partition

2017-11-30 Thread Marc Auslander
I have a debian system with three raid 1 partitions, root and 2 data 
partitions.  It's x86 and I've decided to clean install amd64 and face 
the music of re configuring everything.  My plan/hope is to install a 
new amd64 stretch in my root partition and then clean up the mess.


The installer manual is silent about installing in an existing raid 
partition.  I could follow my nose but wondered if there is any advice 
you can provide.


(Note that I have never had any issues booting from the raid root.  its 
md1.2.)




Re: Buster installation and RAID 1

2017-11-06 Thread Pascal Hambourg

On 06/11/2017 at 11:38, Kohler Gerard wrote:

hello,

I am currently on Stretch with a RAID 1 for my /home partition

I would like to move to Debian buster without losing my data ;-)


If the buster installer is like previous versions, it will automatically 
detect and assemble your RAID sets and you will be able to do whatever 
you like with them. For example, mount one on /home. You just have to be 
careful not to mark the volume "to be formatted".


Unplugging one of the RAID disks during the installation means running 
the risk of having to resynchronise 4 TB (which takes a long time) when 
the disk is plugged back in.




Re: Buster installation and RAID 1

2017-11-06 Thread Kohler Gerard

indeed that is not ideal,
I use a RAID 1 with two 4 TB drives for my data, and I back up 
to a NAS
I did not put / on the RAID for historical reasons: I built my RAID 
after a brutal data loss during a backup to my NAS: a drive had a board 
failure, with corruption of all the data on the disk and on the NAS, 
unrecoverable :'(
more than 3 TB gone up in smoke; fortunately I was able to recover part 
of my data from my laptop.

That said, the same failure could happen again and corrupt everything
 thanks for confirming the methodology

Gerard

On 06/11/2017 at 11:59, daniel huhardeaux wrote:

On 06/11/2017 at 11:38, Kohler Gerard wrote:

hello,

I am currently on Stretch with a RAID 1 for my /home partition

I would like to move to Debian buster without losing my data ;-)

my partition table:

/dev/sda1 linux-raid /dev/md0

/dev/sda2 linux-raid /dev/md1

/dev/sdb1 linux-raid /dev/md0

/dev/sdb2 linux-raid /dev/md1

/dev/md0 linux-swap

/dev/md1 ext3 /home

/dev/sdc1 fat32 /boot/efi

/dev/sdc2 ext4 /

/dev/sdc3 ext4 partition set aside for installing buster


I admit I am a little stressed about the buster installation

what strategy will let me find my RAID again under buster without 
losing my data


Since sdc is a third disk, if you say nothing during the installation 
everything will happen on that disk. Then, once buster is installed, 
you can edit fstab to put your /home, which is on RAID, back in place. 
An extra precaution, if you insist ;), is to unplug sdb before the 
installation; that way you will have a spare disk of your RAID just 
in case.


That said, if you are careful there is no problem: you can hand your 
RAID /home to the installer during the installation, just don't ask 
for it to be formatted.


For information, why is / not on RAID? I get the impression that you 
are using your /home RAID as a backup. If that is the case, that is 
not IMHO a good approach.






Re: Buster installation and RAID 1

2017-11-06 Thread Kohler Gerard

thanks,

backup to a NAS every day, so no problem, but better to avoid bringing 
back 3 TB of data if we can!




On 06/11/2017 at 12:06, Pierre L. wrote:

Hello Gérard,

First of all, step 1 if I may: back up the data, if that has not
already been done :)

I'll let the experts take it from here ;)



On 06/11/2017 at 11:38, Kohler Gerard wrote:

what strategy will let me find my RAID again under buster without losing
my data






Re: Buster installation and RAID 1

2017-11-06 Thread Pierre L.
Hello Gérard,

First of all, step 1 if I may: back up the data, if that has not
already been done :)

I'll let the experts take it from here ;)



On 06/11/2017 at 11:38, Kohler Gerard wrote:
> what strategy will let me find my RAID again under buster without losing
> my data




signature.asc
Description: OpenPGP digital signature


Re: Buster installation and RAID 1

2017-11-06 Thread daniel huhardeaux

On 06/11/2017 at 11:38, Kohler Gerard wrote:

hello,

I am currently on Stretch with a RAID 1 for my /home partition

I would like to move to Debian buster without losing my data ;-)

my partition table:

/dev/sda1 linux-raid /dev/md0

/dev/sda2 linux-raid /dev/md1

/dev/sdb1 linux-raid /dev/md0

/dev/sdb2 linux-raid /dev/md1

/dev/md0 linux-swap

/dev/md1 ext3 /home

/dev/sdc1 fat32 /boot/efi

/dev/sdc2 ext4 /

/dev/sdc3 ext4 partition set aside for installing buster


I admit I am a little stressed about the buster installation

what strategy will let me find my RAID again under buster without losing 
my data


Since sdc is a third disk, if you say nothing during the installation 
everything will happen on that disk. Then, once buster is installed, you 
can edit fstab to put your /home, which is on RAID, back in place. An 
extra precaution, if you insist ;), is to unplug sdb before the 
installation; that way you will have a spare disk of your RAID just in case.


That said, if you are careful there is no problem: you can hand your RAID 
/home to the installer during the installation, just don't ask for it to 
be formatted.
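
(Concretely that is one line in the new system's /etc/fstab -- the UUID 
below is a placeholder, take the real one from blkid /dev/md1:

UUID=<uuid-of-md1>  /home  ext3  defaults  0  2

or simply use /dev/md1 in the first field, since md names are normally 
stable once mdadm.conf is in place.)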


For information, why is / not on RAID? I get the impression that you are 
using your /home RAID as a backup. If that is the case, that is not IMHO 
a good approach.


--
Daniel



Buster installation and RAID 1

2017-11-06 Thread Kohler Gerard

hello,

I am currently on Stretch with a RAID 1 for my /home partition

I would like to move to Debian buster without losing my data ;-)

my partition table:

/dev/sda1 linux-raid /dev/md0

/dev/sda2 linux-raid /dev/md1

/dev/sdb1 linux-raid /dev/md0

/dev/sdb2 linux-raid /dev/md1

/dev/md0 linux-swap

/dev/md1 ext3 /home

/dev/sdc1 fat32 /boot/efi

/dev/sdc2 ext4 /

/dev/sdc3 ext4 partition set aside for installing buster


I admit I am a little stressed about the buster installation

what strategy will let me find my RAID again under buster without losing 
my data



thank you for your help


Gerard



Re: Hot swapping failed disk /dev/sda in RAID 1 array

2016-07-20 Thread Urs Thuermann
Peter Ludikovsky  writes:

> Ad 1: Yes, the SATA controller has to support Hot-Swap. You _can_ remove
> the device nodes by running
> # echo 1 > /sys/block/<disk>/device/delete

Thanks, I have now my RAID array fully working again.  This is what I
have done:

1. Like you suggested above I deleted the drive (/dev/sda* and entries
   in /proc/partitions)

echo 1 > /sys/block/sda/device/delete

2. Hotplug-added the new drive.  Obviously, my controller doesn't
   support or isn't configured to notify the kernel.  Using Google I
   found the command the have the kernel rescan for drives:

echo "- - -" > /sys/class/scsi_host/host0/scan

3. The rest is straight-forward:

fdisk /dev/sda  [Add partition /dev/sda1 with type 0xfd]
mdadm /dev/md0 --add /dev/sda1
update-grub

Now, everything is up again and both drives synced, without reboot:

# cat /proc/mdstat 
Personalities : [raid1] 
md0 : active raid1 sda1[2] sdb1[1]
  1953381376 blocks super 1.2 [2/2] [UU]
  bitmap: 1/15 pages [4KB], 65536KB chunk

unused devices: 
# uptime
 11:49:01 up 106 days, 22:44, 23 users,  load average: 0.13, 0.19, 0.15

I only wonder if it's normal that the drives are numbered 2 and 1
instead of 0 and 1.

> Ad 2: Depends on the controller, see 1. It might recognize the new
> drive, or not. It might see the correct device, or not.

Next time I reboot the machine I will check whether there are any BIOS
settings to make the controller support hot-plugging.

urs



Re: Hot swapping failed disk /dev/sda in RAID 1 array

2016-07-19 Thread Pascal Hambourg

Le 19/07/2016 à 16:01, Urs Thuermann a écrit :


   Shouldn't the device nodes and entries in /proc/partitions
   disappear when the drive is pulled?  Or does the BIOS or the SATA
   controller have to support this?

2. Can I hotplug the new drive and rebuild the RAID array?


As others replied, the SATA controller must support hot-plug, but also 
must be configured in AHCI mode in the BIOS settings so that the kernel 
is notified when a device is added or removed.




Re: Hot swapping failed disk /dev/sda in RAID 1 array

2016-07-19 Thread Andy Smith
Hi Urs,

On Tue, Jul 19, 2016 at 04:01:39PM +0200, Urs Thuermann wrote:
> 2. Can I hotplug the new drive and rebuild the RAID array?

It should work, if your SATA port supports hotplug. Plug the new
drive in and see if the new device node appears. If it does then
you're probably good to go.

You can dump out the partition table from an existing drive with
something like:

# sfdisk -d /dev/sdb > sdb.out

And then partition the new drive the same with something like:

# sfdisk /dev/sdc < sdb.out

(assuming sdb is your working existing drive and sdc is the device
node of the new drive)

Then add the new device to the md with something like:

# mdadm /dev/md0 --add /dev/sdc1

(assuming your array is md0; adjust to suit)

At that point /proc/mdstat should show a rebuild taking place.
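
Something like

# watch -n 10 cat /proc/mdstat

will show the recovery percentage and estimated finish time as it 
progresses.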

If you run into difficulty try asking on the linux-raid mailing list
- it's very good for support and it's best to ask there before doing
anything that you have the slightest doubt about!

Cheers,
Andy

-- 
http://bitfolk.com/ -- No-nonsense VPS hosting



Re: Hot swapping failed disk /dev/sda in RAID 1 array

2016-07-19 Thread Peter Ludikovsky
Ad 1: Yes, the SATA controller has to support Hot-Swap. You _can_ remove
the device nodes by running
# echo 1 > /sys/block/<disk>/device/delete

Ad 2: Depends on the controller, see 1. It might recognize the new
drive, or not. It might see the correct device, or not.

Ad 3: As long as the second HDD is within the BIOS boot order, that
should work.

Regards,
/peter

Am 19.07.2016 um 16:01 schrieb Urs Thuermann:
> In my RAID 1 array /dev/md0 consisting of two SATA drives /dev/sda1
> and /dev/sdb1 the first drive /dev/sda has failed.  I have called
> mdadm --fail and mdadm --remove on that drive and then pulled the
> cables and removed the drive.  The RAID array continues to work fine
> but in degraded mode.
> 
> I have some questions:
> 
> 1. The block device nodes /dev/sda and /dev/sda1 still exist and the
>partitions are still listed in /proc/partitions.
> 
>That causes I/O errors when running LVM tools or fdisk -l or other
>tools that try to access/scan all block devices.
> 
>Shouldn't the device nodes and entries in /proc/partitions
>disappear when the drive is pulled?  Or does the BIOS or the SATA
>controller have to support this?
> 
> 2. Can I hotplug the new drive and rebuild the RAID array?  Since
>removal of the old drive seems not to be detected I wonder if the
>new drive will be detected correctly.  Will the kernel continue
>with the old drive's size and partitioning, as is still found in
>/proc/partitions?  Will a call
> 
> blockdev --rereadpt /dev/sda
> 
>help?
> 
> 3. Alternatively, I could reboot the system.  I have called
> 
> grub-install /dev/sdb
> 
>and hope this suffices to make the system bootable again.
>Would that be safer?
> 
> Any other suggestions?
> 
> 
> urs
> 





Hot swapping failed disk /dev/sda in RAID 1 array

2016-07-19 Thread Urs Thuermann
In my RAID 1 array /dev/md0 consisting of two SATA drives /dev/sda1
and /dev/sdb1 the first drive /dev/sda has failed.  I have called
mdadm --fail and mdadm --remove on that drive and then pulled the
cables and removed the drive.  The RAID array continues to work fine
but in degraded mode.

I have some questions:

1. The block device nodes /dev/sda and /dev/sda1 still exist and the
   partitions are still listed in /proc/partitions.

   That causes I/O errors when running LVM tools or fdisk -l or other
   tools that try to access/scan all block devices.

   Shouldn't the device nodes and entries in /proc/partitions
   disappear when the drive is pulled?  Or does the BIOS or the SATA
   controller have to support this?

2. Can I hotplug the new drive and rebuild the RAID array?  Since
   removal of the old drive seems not to be detected I wonder if the
   new drive will be detected correctly.  Will the kernel continue
   with the old drive's size and partitioning, as is still found in
   /proc/partitions?  Will a call

blockdev --rereadpt /dev/sda

   help?

3. Alternatively, I could reboot the system.  I have called

grub-install /dev/sdb

   and hope this suffices to make the system bootable again.
   Would that be safer?

Any other suggestions?


urs



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-03 Thread Damien TOURDE
Hello,

On 03/02/2016 00:05, Pascal Hambourg wrote:
> /dev/md0: TYPE="promise_fasttrack_raid_member"
> In my opinion, I repeat, this is where you should be looking.
>
>
>
I think you are right, Pascal; I had not grasped the meaning of
"promise".

A long time ago, back when I was unaware of how unwise and risky
pseudo-hardware RAID is (these are 150 GB disks after all, there are
more recent ones), these disks were used with the fakeraid of an MSI
motherboard.

Since then I have used them in a small server with mdadm RAID, without
major problems, and again with mdadm RAID today while waiting to
replace the disks with bigger ones.
I stopped putting them in the server because, for one thing, the
server is no more, and for another, they grind loudly and make a
terrible racket.

Either I formatted them badly and they still contain superblocks from
the old raid managed by the chipset of the old motherboard, or I made
a blunder in the configuration of my current motherboard; I will dig
into that.


But I cannot find any similar or comparable cases on Google, so I will
have to do this "the old-fashioned way", if I manage it...


On the other hand, I never ran into boot problems on my old server...
and that is not logical...



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-02 Thread Damien TOURDE
In case it gives a clue, there is no trace of my RAID in:

root@olorin-fixe:~# ls -al /dev/disk/by-uuid/
total 0
drwxr-xr-x 2 root root 120 févr.  2 19:02 .
drwxr-xr-x 5 root root 100 févr.  2 19:02 ..
lrwxrwxrwx 1 root root  10 févr.  2 19:02
431f08fe-abcf-4c69-909e-0433a5626906 -> ../../sda1
lrwxrwxrwx 1 root root  10 févr.  2 19:02
75f98820-d831-4285-9a38-c2a621f52d49 -> ../../dm-0
lrwxrwxrwx 1 root root  10 févr.  2 19:02
829e1d02-f563-48f0-a042-e95ef5cd1b15 -> ../../dm-1
lrwxrwxrwx 1 root root  10 févr.  2 19:02
a0a69a8d-080f-4535-87d0-4b91261e854a -> ../../dm-2

root@olorin-fixe:~# blkid
/dev/sdc1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
UUID_SUB="5cacc338-609c-442a-2fcb-cde38f976d58" LABEL="olorin-fixe:0"
TYPE="linux_raid_member" PARTUUID="40988f99-01"
/dev/sdb1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
UUID_SUB="c522994f-024d-e113-5b30-8c864aad35d8" LABEL="olorin-fixe:0"
TYPE="linux_raid_member" PARTUUID="2600ee9a-01"
/dev/sda1: UUID="431f08fe-abcf-4c69-909e-0433a5626906" TYPE="ext2"
PARTUUID="a89006b2-01"
/dev/sda5: UUID="4sm0Ld-dD6D-scQm-53Lp-BjrT-tFmd-bdPwAV"
TYPE="LVM2_member" PARTUUID="a89006b2-05"
/dev/mapper/olorin--fixe--vg-root:
UUID="75f98820-d831-4285-9a38-c2a621f52d49" TYPE="ext4"
/dev/mapper/olorin--fixe--vg-swap_1:
UUID="829e1d02-f563-48f0-a042-e95ef5cd1b15" TYPE="swap"
/dev/md0: TYPE="promise_fasttrack_raid_member"
/dev/mapper/olorin--fixe--vg-home:
UUID="a0a69a8d-080f-4535-87d0-4b91261e854a" TYPE="ext4"


As a reminder:

sda -> system SSD with LVM
sdb+sdc -> mdadm RAID 1 with LVM (storage)

On 02/02/2016 20:25, Damien TOURDE wrote:
> Good evening,
>
> A small correction: it is with "vgimport -a" that my volume
> reappears, even though I never ran vgexport on my volumes...
>
> In fact, it confirms this:
>
> root@olorin-fixe:~# vgimport -a
>   Volume group "olorin-fixe-vg" is not exported
>   Volume group "olorin-fixe-storage" is not exported
>
>
> On the other hand, when I put my volume back in fstab, I boot into
> single-user mode (with systemd waiting 1m30s for the disk to respond).
>
> Here is the (truncated) log of the single-user boot:
> PS: if you need the full log... I will post it, but a boot log is a
> bit big for an email!
>
> [...]
>
> scsi 2:0:0:0: Direct-Access ATA  HDS722516VLSA80  A6MA PQ: 0 ANSI: 5
> févr. 02 18:58:08 olorin-fixe kernel: ata6: SATA link down (SStatus 4
> SControl 300)
> févr. 02 18:58:08 olorin-fixe kernel: scsi 4:0:0:0: Direct-Access
> ATA  HDS722516VLSA80  A6MA PQ: 0 ANSI: 5
> févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] 976773168
> 512-byte logical blocks: (500 GB/465 GiB)
> févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] 321672960
> 512-byte logical blocks: (164 GB/153 GiB)
> févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Write Protect is off
> févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Mode Sense: 00
> 3a 00 00
> févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Write Protect is off
> févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Mode Sense: 00
> 3a 00 00
> févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Write cache:
> enabled, read cache: enabled, doesn't support DPO or FUA
> févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Write cache:
> enabled, read cache: enabled, doesn't support DPO or FUA
> févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] 321672960
> 512-byte logical blocks: (164 GB/153 GiB)
> févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Write Protect is off
> févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Mode Sense: 00
> 3a 00 00
> févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Write cache:
> enabled, read cache: enabled, doesn't support DPO or FUA
> févr. 02 18:58:08 olorin-fixe kernel:  sda: sda1 sda2 < sda5 >
> févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Attached SCSI disk
> févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0:
> registered PHC clock
> févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: (PCI
> Express:2.5GT/s:Width x1) 30:5a:3a:83:4f:e6
> févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: Intel(R)
> PRO/1000 Network Connection
> févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: MAC: 12,
> PHY: 12, PBA No: FF-0FF
> févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 enp0s31f6:
> renamed from eth0
> févr. 02 18:58:08 olorin-fixe kernel:  sdb: sdb1
> févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Attached SCSI disk
&

Re: Restauration d'une config LVM / mdadm raid 1

2016-02-02 Thread Damien TOURDE
Good evening,

A small correction: it is with "vgimport -a" that my volume reappears,
even though I never ran vgexport on my volumes...

In fact, it confirms this:

root@olorin-fixe:~# vgimport -a
  Volume group "olorin-fixe-vg" is not exported
  Volume group "olorin-fixe-storage" is not exported


On the other hand, when I put my volume back in fstab, I boot into
single-user mode (with systemd waiting 1m30s for the disk to respond).

Here is the (truncated) log of the single-user boot:
PS: if you need the full log... I will post it, but a boot log is a
bit big for an email!

[...]

scsi 2:0:0:0: Direct-Access ATA  HDS722516VLSA80  A6MA PQ: 0 ANSI: 5
févr. 02 18:58:08 olorin-fixe kernel: ata6: SATA link down (SStatus 4
SControl 300)
févr. 02 18:58:08 olorin-fixe kernel: scsi 4:0:0:0: Direct-Access
ATA  HDS722516VLSA80  A6MA PQ: 0 ANSI: 5
févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] 976773168
512-byte logical blocks: (500 GB/465 GiB)
févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] 321672960
512-byte logical blocks: (164 GB/153 GiB)
févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Write Protect is off
févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Mode Sense: 00
3a 00 00
févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Write Protect is off
févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Mode Sense: 00
3a 00 00
févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Write cache:
enabled, read cache: enabled, doesn't support DPO or FUA
févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Write cache:
enabled, read cache: enabled, doesn't support DPO or FUA
févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] 321672960
512-byte logical blocks: (164 GB/153 GiB)
févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Write Protect is off
févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Mode Sense: 00
3a 00 00
févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Write cache:
enabled, read cache: enabled, doesn't support DPO or FUA
févr. 02 18:58:08 olorin-fixe kernel:  sda: sda1 sda2 < sda5 >
févr. 02 18:58:08 olorin-fixe kernel: sd 1:0:0:0: [sda] Attached SCSI disk
févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0:
registered PHC clock
févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: (PCI
Express:2.5GT/s:Width x1) 30:5a:3a:83:4f:e6
févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: Intel(R)
PRO/1000 Network Connection
févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 eth0: MAC: 12,
PHY: 12, PBA No: FF-0FF
févr. 02 18:58:08 olorin-fixe kernel: e1000e :00:1f.6 enp0s31f6:
renamed from eth0
févr. 02 18:58:08 olorin-fixe kernel:  sdb: sdb1
févr. 02 18:58:08 olorin-fixe kernel: sd 2:0:0:0: [sdb] Attached SCSI disk
févr. 02 18:58:08 olorin-fixe kernel:  sdc: sdc1
févr. 02 18:58:08 olorin-fixe kernel: sd 4:0:0:0: [sdc] Attached SCSI disk

[...]

-- L'unité (unit) hdparm.service a terminé son démarrage, avec le
résultat done.
févr. 02 18:58:08 olorin-fixe kernel: md: md0 stopped.
févr. 02 18:58:08 olorin-fixe kernel: md: bind
févr. 02 18:58:08 olorin-fixe kernel: md: bind
févr. 02 18:58:08 olorin-fixe kernel: usb 1-8: new full-speed USB device
number 4 using xhci_hcd
févr. 02 18:58:08 olorin-fixe kernel: md: raid1 personality registered
for level 1
févr. 02 18:58:08 olorin-fixe kernel: md/raid1:md0: active with 2 out of
2 mirrors
févr. 02 18:58:08 olorin-fixe kernel: created bitmap (2 pages) for
device md0
févr. 02 18:58:08 olorin-fixe kernel: md0: bitmap initialized from disk:
read 1 pages, set 0 of 2453 bits
févr. 02 18:58:08 olorin-fixe kernel: md0: detected capacity change from
0 to 164561289216
févr. 02 18:58:08 olorin-fixe mdadm-raid[249]: Assembling MD array
md0...done (started [2/2]).
févr. 02 18:58:08 olorin-fixe mdadm-raid[249]: Generating udev events
for MD arrays...done.
févr. 02 18:58:08 olorin-fixe systemd[1]: Started LSB: MD array assembly.
-- Subject: L'unité (unit) mdadm-raid.service a terminé son démarrage
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- L'unité (unit) mdadm-raid.service a terminé son démarrage, avec le
résultat done.
févr. 02 18:58:08 olorin-fixe systemd[1]: Started MD array monitor.
-- Subject: L'unité (unit) mdmonitor.service a terminé son démarrage
-- Defined-By: systemd

[...]

févr. 02 18:59:38 olorin-fixe systemd[1]:
dev-disk-by\x2duuid-f84fe148\x2da775\x2deac4\x2d76ff\x2d776e5845be39.device:
Job
dev-disk-by\x2duuid-f84fe148\x2da775\x2deac4\x2d76ff\x2d776e5845be39.device/start
timed out.
févr. 02 18:59:38 olorin-fixe systemd[1]: Timed out waiting for device
dev-disk-by\x2duuid-f84fe148\x2da775\x2deac4\x2d76ff\x2d776e5845be39.device.
-- Subject: L'unité (unit)
dev-disk-by\x2duuid-f84fe148\x2da775\x2deac4\x2d76ff\x2d776e5845be39.device
a échoué
-- Defined-By: systemd
-- Support: http://lists.freedesktop.org/mailman/listinfo/systemd-devel
-- 
-- L'unité (unit)
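
The 1m30s wait above matches systemd's default 90 s device timeout for
fstab entries. A hedged sketch of an fstab line that avoids blocking
the boot when the array is missing (mount point and filesystem type
are only guesses here):

/dev/olorin-fixe-storage/lvstorage0  /srv/storage  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2

With nofail the mount is no longer a hard requirement of boot, so a
missing device no longer drops the system into single-user mode.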

Re: Restauration d'une config LVM / mdadm raid 1

2016-02-02 Thread Pascal Hambourg
Damien TOURDE wrote:
> In case it gives a clue, there is no trace of my RAID in:
> 
> root@olorin-fixe:~# ls -al /dev/disk/by-uuid/

That is normal, just as there is no trace of sda5, which also contains
an LVM PV and not a filesystem or a swap.

> root@olorin-fixe:~# blkid
(...)
> /dev/md0: TYPE="promise_fasttrack_raid_member"

In my opinion, I repeat, this is where you should be looking.



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Christophe De Natale
On Monday, 01 February 2016 at 20:40 +0100, Damien TOURDE wrote:
> Hello,
>
> After a vgck olorin-fixe-storage, the lvdisplay, vgdisplay and
> pvdisplay commands once again report the existence of my "lost"
> LVM.
>
Good evening,

So, did you find the reason for this situation?

Have a good evening,

-- 
Christophe De Natale 



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Damien TOURDE
The third possibility is that, to boot faster, I told my motherboard
not to "detect" the disks at every startup and to always choose the
SSD unless F8 is pressed.


Otherwise, I have no idea.

On 01/02/2016 21:43, Pascal Hambourg wrote:
> Damien TOURDE wrote:
>> Either I changed SATA port, and therefore perhaps controller, on the
>> motherboard, which could have changed a UUID (it seems to me that
>> the UUID is generated from the characteristics of the disk and of
>> the hardware in general).
> No, UUIDs are generated pseudo-randomly and independently of any
> hardware identifier.
>
>> Or the connector of one of my 2 disks got messed up while I was
>> handling the cables. As I said in the first email, the whole plastic
>> guide of the SATA connector of one of the disks stayed stuck in the
>> male part (the cable), and only the bare contacts are left exposed
>> on the disk.
>>
>> But the second "theory" seems more far-fetched to me because I did
>> not receive an error email from mdadm.
> And above all, the RAID is there precisely so that this has no
> visible consequences.
>
>
>



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Damien TOURDE
Hello,

After a vgck olorin-fixe-storage, the lvdisplay, vgdisplay and
pvdisplay commands once again report the existence of my "lost"
LVM.

On the other hand, it is "NOT available", and I do not see why...
but that can be fixed.

Here are the results of the 3 commands (I have removed the PV of the
SSD, which works, from all of this):

root@olorin-fixe:~# vgdisplay

  --- Volume group ---
  VG Name   olorin-fixe-storage
  System ID
  Formatlvm2
  Metadata Areas1
  Metadata Sequence No  2
  VG Access read/write
  VG Status resizable
  MAX LV0
  Cur LV1
  Open LV   0
  Max PV0
  Cur PV1
  Act PV1
  VG Size   153,26 GiB
  PE Size   4,00 MiB
  Total PE  39234
  Alloc PE / Size   25600 / 100,00 GiB
  Free  PE / Size   13634 / 53,26 GiB
  VG UUID   o7zoRL-xK1j-2mmo-ZFJi-1wFq-iGft-M9MbyQ

root@olorin-fixe:~# pvdisplay

  --- Physical volume ---
  PV Name   /dev/md0
  VG Name   olorin-fixe-storage
  PV Size   153,26 GiB / not usable 1,88 MiB
  Allocatable   yes
  PE Size   4,00 MiB
  Total PE  39234
  Free PE   13634
  Allocated PE  25600
  PV UUID   b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf


root@olorin-fixe:~# lvdisplay

  --- Logical volume ---
  LV Path/dev/olorin-fixe-storage/lvstorage0
  LV Namelvstorage0
  VG Nameolorin-fixe-storage
  LV UUIDUiPmCd-2655-ebnc-24Fk-GLqp-bGkj-MKhu7j
  LV Write Accessread/write
  LV Creation host, time olorin-fixe, 2016-01-17 23:01:59 +0100
  LV Status  NOT available
  LV Size100,00 GiB
  Current LE 25600
  Segments   1
  Allocation inherit
  Read ahead sectors auto



--

Then a quick:
root@olorin-fixe:~# vgchange -a y olorin-fixe-storage


And presto! It's back up and running!
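
To confirm, lvscan should now show the LV as ACTIVE and it can be
mounted again (the mount point below is only an example):

root@olorin-fixe:~# lvscan
root@olorin-fixe:~# mount /dev/olorin-fixe-storage/lvstorage0 /mnt/storage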




On 31/01/2016 22:47, Damien TOURDE wrote:
> Thanks,
>
> I will investigate along those lines and will come back to the list
> if I find something or fail ;-)
>
> Have a good rest of the weekend,
> Damien
>
> On 31/01/2016 22:24, Pascal Hambourg wrote:
>> Damien TOURDE wrote:
>>> On 31/01/2016 20:14, Pascal Hambourg wrote:
>>>> Damien TOURDE wrote:
>>>>> It is an mdadm RAID 1, with a single LVM partition
>>>> A partition, so a partitioned RAID array, with a partition table?
>>>> What does fdisk or another tool say about it?
>>> root@olorin-fixe:~# fdisk -l /dev/md0
>>> Disque /dev/md0 : 153,3 GiB, 164561289216 octets, 321408768 secteurs
>>> Unités : secteur de 1 × 512 = 512 octets
>>> Taille de secteur (logique / physique) : 512 octets / 512 octets
>>> taille d'E/S (minimale / optimale) : 512 octets / 512 octets
>> No partition table, so no /dev/md0p1 partition. The classic case;
>> partitioned RAID is rarely used. I suppose people prefer to put LVM
>> on top of it for volume management.
>>
>>>>> I put the uuid that blkid gives me into the backup file
>>>> Which UUID?
>>> The "physical" UUID of the partition (that is how I see it), the
>>> one that contains LVM.
>> That is the UUID of the RAID array, which is used to recognize its
>> members. Nothing to do with LVM.
>>
>>> root@olorin-fixe:~# blkid
>>> /dev/sdc1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
>>> UUID_SUB="5cacc338-609c-442a-2fcb-cde38f976d58" LABEL="olorin-fixe:0"
>>> TYPE="linux_raid_member" PARTUUID="40988f99-01"
>>> /dev/sdb1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
>>> UUID_SUB="c522994f-024d-e113-5b30-8c864aad35d8" LABEL="olorin-fixe:0"
>>> TYPE="linux_raid_member" PARTUUID="2600ee9a-01"
>>> /dev/md0: TYPE="promise_fasttrack_raid_member"
>> I do not like that. Apparently blkid sees a Promise RAID member
>> identifier in the contents of the RAID array, and I suppose that
>> prevents it from seeing the LVM identifier. If lvm relies on that to
>> find its PVs, it will not work.
>>
>>> root@olorin-fixe:~# file -s /dev/md0
>>> /dev/md0: LVM2 PV (Linux Logical Volume Manager), UUID:
>>> b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf, size: 164561289216
>> That is rather reassuring; the LVM header is present.
>>
>> We would need to see whether lvm can be forced to consider a volume
>> to be a PV even if blkid does not say so.
>> Another lead: find the stray Promise RAID identifier and erase it.
>> See dmraid.
>>
>>
>>
>
>



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Pascal Hambourg
Damien TOURDE wrote:
> 
> Either I changed SATA port, and therefore perhaps controller, on the
> motherboard, which could have changed a UUID (it seems to me that the
> UUID is generated from the characteristics of the disk and of the
> hardware in general).

No, UUIDs are generated pseudo-randomly and independently of any
hardware identifier.

> Or the connector of one of my 2 disks got messed up while I was
> handling the cables. As I said in the first email, the whole plastic
> guide of the SATA connector of one of the disks stayed stuck in the
> male part (the cable), and only the bare contacts are left exposed
> on the disk.
> 
> But the second "theory" seems more far-fetched to me because I did
> not receive an error email from mdadm.

And above all, the RAID is there precisely so that this has no visible
consequences.



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Damien TOURDE
I have 2 leads:

Either I changed SATA port, and therefore perhaps controller, on the
motherboard, which could have changed a UUID (it seems to me that the
UUID is generated from the characteristics of the disk and of the
hardware in general).

Or the connector of one of my 2 disks got messed up while I was
handling the cables. As I said in the first email, the whole plastic
guide of the SATA connector of one of the disks stayed stuck in the
male part (the cable), and only the bare contacts are left exposed on
the disk.

But the second "theory" seems more far-fetched to me because I did not
receive an error email from mdadm.

On 01/02/2016 20:53, Christophe De Natale wrote:
> On Monday, 01 February 2016 at 20:40 +0100, Damien TOURDE wrote:
>> Hello,
>>
>> After a vgck olorin-fixe-storage, the lvdisplay, vgdisplay and
>> pvdisplay commands once again report the existence of my "lost"
>> LVM.
>>
> Good evening,
>
> So, did you find the reason for this situation?
>
> Have a good evening,
>



Re: Restauration d'une config LVM / mdadm raid 1

2016-02-01 Thread Pascal Hambourg
Damien TOURDE wrote:
> The third possibility is that, to boot faster, I told my motherboard
> not to "detect" the disks at every startup and to always choose the
> SSD unless F8 is pressed.

I do not see what that has to do with the non-detection of the PV,
which is contained in a RAID array that is itself perfectly well
detected.



Restauration d'une config LVM / mdadm raid 1

2016-01-31 Thread Damien TOURDE
Hello,

Following some physical work (replacing the thermal paste), my raid no
longer wants to mount. I may have touched a cable, but in any case the
motherboard's UEFI does recognize 2 active disks (+ the system SSD, no
problem with that one), and so does Debian.

The 2 disks are quite old (including one whose connector ended up
"embedded" in the cable... hard to explain), but SMART gives me
nothing alarming.


Since then, in order to boot, I have to remove the RAID from fstab,
and there is no way to mount the disk.

It is an mdadm RAID 1, with a single LVM partition, and while the
disks are properly recognized, LVM cannot find any PV, VG or LV
for me.


I tried vgcfgrestore, but it tells me it cannot find the uuid
corresponding to the file.
I put the uuid that blkid gives me into the backup file, no luck
either... I am starting to run out of ideas here...

Here is what I can tell you:

root@olorin-fixe:~# mdadm --detail /dev/md0
/dev/md0:
Version : 1.2
  Creation Time : Sun Jan 17 22:18:35 2016
 Raid Level : raid1
 Array Size : 160704384 (153.26 GiB 164.56 GB)
  Used Dev Size : 160704384 (153.26 GiB 164.56 GB)
   Raid Devices : 2
  Total Devices : 2
Persistence : Superblock is persistent

  Intent Bitmap : Internal

Update Time : Thu Jan 21 01:06:17 2016
  State : clean
 Active Devices : 2
Working Devices : 2
 Failed Devices : 0
  Spare Devices : 0

   Name : olorin-fixe:0  (local to host olorin-fixe)
   UUID : f84fe148:a775eac4:76ff776e:5845be39
 Events : 764

Number   Major   Minor   RaidDevice State
   0   8   170  active sync   /dev/sdb1
   1   8   331  active sync   /dev/sdc1


---

root@olorin-fixe:~# mdadm --examine --scan /dev/sdb1 /dev/sdc1
ARRAY /dev/md/0  metadata=1.2 UUID=f84fe148:a775eac4:76ff776e:5845be39
name=olorin-fixe:0

---

root@olorin-fixe:~# e2fsck -f /dev/md0
e2fsck 1.42.13 (17-May-2015)
ext2fs_open2: Numéro magique invalide dans le super-bloc
e2fsck : Superbloc invalide, tentons d'utiliser les blocs de sauvetage...
e2fsck: Numéro magique invalide dans le super-bloc lors de la tentative
d'ouverture de /dev/md0

Le superbloc n'a pu être lu ou ne contient pas un système de fichiers
ext2/ext3/ext4 correct. Si le périphérique est valide et qu'il contient
réellement
un système de fichiers ext2/ext3/ext4 (et non pas de type swap, ufs ou
autre),
alors le superbloc est corrompu, et vous pourriez tenter d'exécuter
e2fsck avec un autre superbloc :
e2fsck -b 8193 <périphérique>
 ou
e2fsck -b 32768 <périphérique>


---

root@olorin-fixe:~# lvscan -a
  ACTIVE'/dev/olorin-fixe-vg/root' [20,00 GiB] inherit
  ACTIVE'/dev/olorin-fixe-vg/swap_1' [15,76 GiB] inherit
  ACTIVE'/dev/olorin-fixe-vg/home' [320,00 GiB] inherit

root@olorin-fixe:~# vgscan
  Reading all physical volumes.  This may take a while...
  Found volume group "olorin-fixe-vg" using metadata type lvm2

root@olorin-fixe:~# pvscan
  PV /dev/sda5   VG olorin-fixe-vg   lvm2 [465,52 GiB / 109,76 GiB free]
  Total: 1 [465,52 GiB] / in use: 1 [465,52 GiB] / in no VG: 0 [0   ]


---> The LVM on the RAID is VG olorin-fixe-storage / LV storage0;
here it only recognizes the system SSD.



Re: Restauration d'une config LVM / mdadm raid 1

2016-01-31 Thread Damien TOURDE
Hello,

Thank you for your reply:

On 31/01/2016 20:14, Pascal Hambourg wrote:
> Damien TOURDE wrote:
>> It is an mdadm RAID 1, with a single LVM partition
> A partition, so a partitioned RAID array, with a partition table?
> What does fdisk or another tool say about it?
root@olorin-fixe:~# fdisk -l /dev/md0
Disque /dev/md0 : 153,3 GiB, 164561289216 octets, 321408768 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets


root@olorin-fixe:~# fdisk -l /dev/sdb
Disque /dev/sdb : 153,4 GiB, 16469620 octets, 321672960 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0x2600ee9a

Périphérique Amorçage Début   Fin  Secteurs Taille Id Type
/dev/sdb1  2048 321672959 321670912 153,4G fd RAID Linux
autodétecté
root@olorin-fixe:~# fdisk -l /dev/sdc
Disque /dev/sdc : 153,4 GiB, 16469620 octets, 321672960 secteurs
Unités : secteur de 1 × 512 = 512 octets
Taille de secteur (logique / physique) : 512 octets / 512 octets
taille d'E/S (minimale / optimale) : 512 octets / 512 octets
Type d'étiquette de disque : dos
Identifiant de disque : 0x40988f99

Périphérique Amorçage Début   Fin  Secteurs Taille Id Type
/dev/sdc1  2048 321672959 321670912 153,4G fd RAID Linux
autodétecté
root@olorin-fixe:~#


>> I put the uuid that blkid gives me into the backup file
> Which UUID?
The "physical" UUID of the partition (that is how I see it), the one
that contains LVM.


root@olorin-fixe:~# blkid
/dev/sdc1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
UUID_SUB="5cacc338-609c-442a-2fcb-cde38f976d58" LABEL="olorin-fixe:0"
TYPE="linux_raid_member" PARTUUID="40988f99-01"
/dev/sdb1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
UUID_SUB="c522994f-024d-e113-5b30-8c864aad35d8" LABEL="olorin-fixe:0"
TYPE="linux_raid_member" PARTUUID="2600ee9a-01"
/dev/md0: TYPE="promise_fasttrack_raid_member"

And I put it in the LVM config file for this PV (I copy the config
file at the bottom of this email).

With the following result:

root@olorin-fixe:~# vgcfgrestore -f
/etc/lvm/backup/olorin-fixe-storage.test olorin-fixe-storage
  Couldn't find device with uuid f84fe1-48a7-75ea-c476-ff77-6e58-45be39.
  PV unknown device missing from cache
  Format-specific setup for unknown device failed
  Restore failed.

With the default one:

root@olorin-fixe:~# vgcfgrestore -f
/etc/lvm/backup/olorin-fixe-storage.bck olorin-fixe-storage
  Couldn't find device with uuid b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf.
  PV unknown device missing from cache
  Format-specific setup for unknown device failed
  Restore failed.

>
>> root@olorin-fixe:~# e2fsck -f /dev/md0
> Why on earth run e2fsck on something that is supposed to contain a
> partition table or an LVM PV?
> What does file -s /dev/md0 say instead?
As for e2fsck, that was a bit of a last resort, just to see whether
anything would come out of it...

root@olorin-fixe:~# file -s /dev/md0
/dev/md0: LVM2 PV (Linux Logical Volume Manager), UUID:
b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf, size: 164561289216


-> I got a bit "lost" among the UUIDs. Basically, I find the "b6SE..."
one in /etc/lvm/backup/olorin-fixe-storage and in the output of
file -s /dev/md0

And I find the "f84fe" one in the output of blkid and in
/etc/mdadm/mdadm.conf on the line:

ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=olorin-fixe:0
UUID=f84fe148:a775eac4:76ff776e:5845be39
   devices=/dev/sdb1,/dev/sdc1


--

The config file: /etc/lvm/backup/olorin-fixe-storage

# Generated by LVM2 version 2.02.138(2) (2015-12-14): Sun Jan 17
23:02:00 2016

contents = "Text Format Volume Group"
version = 1

description = "Created *after* executing 'lvcreate -L 100G
olorin-fixe-storage -n lvstorage0'"

creation_host = "olorin-fixe"# Linux olorin-fixe 4.3.0-1-amd64 #1
SMP Debian 4.3.3-5 (2016-01-04) x86_64
creation_time = 1453068120# Sun Jan 17 23:02:00 2016

olorin-fixe-storage {
id = "o7zoRL-xK1j-2mmo-ZFJi-1wFq-iGft-M9MbyQ"
seqno = 2
format = "lvm2"# informational
status = ["RESIZEABLE", "READ", "WRITE"]
flags = []
extent_size = 8192# 4 Megabytes
max_lv = 0
max_pv = 0
metadata_copies = 0

physical_volumes {

pv0 {
id = "b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf"
device = "/dev/md0"# Hint only

status = ["ALLOCATABLE"]
flags = []
dev_

Re: Restauration d'une config LVM / mdadm raid 1

2016-01-31 Thread Pascal Hambourg
Damien TOURDE wrote:
> 
> It is an mdadm RAID 1, with a single LVM partition

A partition, so a partitioned RAID array, with a partition table?
What does fdisk or another tool say about it?

> I put the uuid that blkid gives me into the backup file

Which UUID?

> root@olorin-fixe:~# e2fsck -f /dev/md0

Why on earth run e2fsck on something that is supposed to contain a
partition table or an LVM PV?
What does file -s /dev/md0 say instead?



Re: Restauration d'une config LVM / mdadm raid 1

2016-01-31 Thread Damien TOURDE
Thanks,

I will investigate along those lines and will come back to the list
if I find something or fail ;-)

Have a good rest of the weekend,
Damien

On 31/01/2016 22:24, Pascal Hambourg wrote:
> Damien TOURDE wrote:
>>
>> On 31/01/2016 20:14, Pascal Hambourg wrote:
>>> Damien TOURDE wrote:
>>>> It is an mdadm RAID 1, with a single LVM partition
>>> A partition, so a partitioned RAID array, with a partition table?
>>> What does fdisk or another tool say about it?
>> root@olorin-fixe:~# fdisk -l /dev/md0
>> Disque /dev/md0 : 153,3 GiB, 164561289216 octets, 321408768 secteurs
>> Unités : secteur de 1 × 512 = 512 octets
>> Taille de secteur (logique / physique) : 512 octets / 512 octets
>> taille d'E/S (minimale / optimale) : 512 octets / 512 octets
> No partition table, so no /dev/md0p1 partition. The classic case;
> partitioned RAID is rarely used. I suppose people prefer to put LVM
> on top of it for volume management.
>
>>>> I put the uuid that blkid gives me into the backup file
>>> Which UUID?
>> The "physical" UUID of the partition (that is how I see it), the
>> one that contains LVM.
> That is the UUID of the RAID array, which is used to recognize its
> members. Nothing to do with LVM.
>
>> root@olorin-fixe:~# blkid
>> /dev/sdc1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
>> UUID_SUB="5cacc338-609c-442a-2fcb-cde38f976d58" LABEL="olorin-fixe:0"
>> TYPE="linux_raid_member" PARTUUID="40988f99-01"
>> /dev/sdb1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
>> UUID_SUB="c522994f-024d-e113-5b30-8c864aad35d8" LABEL="olorin-fixe:0"
>> TYPE="linux_raid_member" PARTUUID="2600ee9a-01"
>> /dev/md0: TYPE="promise_fasttrack_raid_member"
> I do not like that. Apparently blkid sees a Promise RAID member
> identifier in the contents of the RAID array, and I suppose that
> prevents it from seeing the LVM identifier. If lvm relies on that to
> find its PVs, it will not work.
>
>> root@olorin-fixe:~# file -s /dev/md0
>> /dev/md0: LVM2 PV (Linux Logical Volume Manager), UUID:
>> b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf, size: 164561289216
> That is rather reassuring; the LVM header is present.
>
> We would need to see whether lvm can be forced to consider a volume
> to be a PV even if blkid does not say so.
> Another lead: find the stray Promise RAID identifier and erase it.
> See dmraid.
>
>
>



Re: Restauration d'une config LVM / mdadm raid 1

2016-01-31 Thread Pascal Hambourg
Damien TOURDE wrote:
> 
> 
> On 31/01/2016 20:14, Pascal Hambourg wrote:
>> Damien TOURDE wrote:
>>> It is an mdadm RAID 1, with a single LVM partition
>> A partition, so a partitioned RAID array, with a partition table?
>> What does fdisk or another tool say about it?
> root@olorin-fixe:~# fdisk -l /dev/md0
> Disque /dev/md0 : 153,3 GiB, 164561289216 octets, 321408768 secteurs
> Unités : secteur de 1 × 512 = 512 octets
> Taille de secteur (logique / physique) : 512 octets / 512 octets
> taille d'E/S (minimale / optimale) : 512 octets / 512 octets

No partition table, so no /dev/md0p1 partition. The classic case;
partitioned RAID is rarely used. I suppose people prefer to put LVM
on top of it for volume management.

>>> I put the uuid that blkid gives me into the backup file
>> Which UUID?
> The "physical" UUID of the partition (that is how I see it), the one
> that contains LVM.

That is the UUID of the RAID array, which is used to recognize its
members. Nothing to do with LVM.

> root@olorin-fixe:~# blkid
> /dev/sdc1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
> UUID_SUB="5cacc338-609c-442a-2fcb-cde38f976d58" LABEL="olorin-fixe:0"
> TYPE="linux_raid_member" PARTUUID="40988f99-01"
> /dev/sdb1: UUID="f84fe148-a775-eac4-76ff-776e5845be39"
> UUID_SUB="c522994f-024d-e113-5b30-8c864aad35d8" LABEL="olorin-fixe:0"
> TYPE="linux_raid_member" PARTUUID="2600ee9a-01"
> /dev/md0: TYPE="promise_fasttrack_raid_member"

I do not like that. Apparently blkid sees a Promise RAID member
identifier in the contents of the RAID array, and I suppose that
prevents it from seeing the LVM identifier. If lvm relies on that to
find its PVs, it will not work.

> root@olorin-fixe:~# file -s /dev/md0
> /dev/md0: LVM2 PV (Linux Logical Volume Manager), UUID:
> b6SEem-WYJK-xcUT-946V-lS0q-Yxic-yFWaxf, size: 164561289216

That is rather reassuring; the LVM header is present.

We would need to see whether lvm can be forced to consider a volume to
be a PV even if blkid does not say so.
Another lead: find the stray Promise RAID identifier and erase it.
See dmraid.
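
As a concrete sketch of those two leads (not something tried in this
thread; outputs and offsets will differ), wipefs can list the stray
signature without touching anything, and dmraid can report and erase
fakeraid metadata on the member disks:

# wipefs --no-act /dev/md0       # list the signatures blkid sees, with their offsets
# dmraid -r                      # report fakeraid metadata found on the raw disks
# dmraid -rE /dev/sdX            # erase that metadata (destructive; back up first)

Once the offset of the promise_fasttrack signature is known, wipefs
--offset can also remove just that one signature.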



Re: Re: Recovering from Debian Wheezy RAID-1

2015-12-23 Thread Narūnas
> I don't see Debian doing anything wrong. fdisk showing a 2.3T
> partition I am assuming comes on your Arch Linux disk and is a result
> of it using the wrong block size. I'm not sure if this is due to the
> use of a USB adapter.
> 
> mdadm -E /dev/sdb should fail because /dev/sdb is not a RAID device.
> mdadm -E /dev/sdb1 should work.

Hi,

I tried it on debian Jessie (a different machine) and ended up with the same 
results as on Archlinux - shuffled partition table, no md super block... This, 
however, didn't explain how debian could boot at all when the disk was connected 
directly. So I bought a new USB adapter and that was it, end of story: debian 
didn't do anything wrong, all this fuss was caused by the malfunctioning USB 
adapter.
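
The numbers fit that explanation. The partition table on the disk was
written in 512-byte sectors, but the bad adapter reported 4096-byte
logical sectors, so fdisk multiplied the sector count of sdb2 by the
wrong unit:

    623187968 sectors x 512 bytes  =~ 319 GB (297 GiB)  -- the real ~300 GB partition
    623187968 sectors x 4096 bytes =~ 2.55 TB (2.3 TiB) -- the "2.3T" fdisk displayed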

Thank you for your time.

Narūnas



Re: Recovering from Debian Wheezy RAID-1

2015-12-22 Thread Gary Dale

On 22/12/15 04:44 PM, Narunas Krasauskas wrote:
I have this HDD (WD3200BPVT) which used to be part of the RAID-1 array 
which has been created with Debian (Wheezy) installer, then AES 
encrypted, then split into LVM volumes.



Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
  311462720 blocks super 1.2 [2/2] [UU]
md0 : active raid1 sda1[0] sdb1[1]
  975296 blocks super 1.2 [2/2] [UU]



- sdb1 (1GB real size) here is a member of the system boot partition 
raid (md0)
- sdb2 (~300 GB) was dedicated to everything else, hence encrypted, 
split into LVM volumes (md1).


Currently I'm on Archlinux, having one of the RAID-1 array disks 
connected via USB adapter.


# uname -a
Linux agn-arch 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET
2015 x86_64 GNU/Linux



My first problem is that Debian installer somehow shuffled partition 
table in the way, that my current system cannot recognize correct 
partition sizes. Here I find 2.3 TB partition on the 320 GB HDD. How 
or why I'm hoping you debian users can tell me.


# fdisk -l /dev/sdb
Disk /dev/sdb: 298.1 GiB, 320072933376 bytes, 78142806 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0009577b


Device Boot Start   End   Sectors  Size Id Type
/dev/sdb1  * 2048   1953791   1951744  7.5G fd Linux raid autodetect
/dev/sdb2 1953792 625141759 623187968  2.3T fd Linux raid autodetect


Nevertheless it seems that kernel sees more or less correct partition 
sizes:


# cat /proc/partitions | grep sdb
   8   16  312571224 sdb
   8   17  7806976 sdb1
   8   18  304756056 sdb2



My second problem - mdadm cannot detect md super block:

# mdadm -V
mdadm - v3.3.4 - 3rd August 2015


# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e
$v /dev/sdb; done
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got 009063eb)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got 7208ec45)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got 009063eb)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got 7208ec45)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc,
got 7208ec45)
mdadm: Cannot read anchor block on /dev/sdb: Invalid argument
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: Cannot read anchor block on /dev/sdb: Invalid argument
mdadm: Failed to load all information sections on /dev/sdb

# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e
$v /dev/sdb1; done
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got a2a14843)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got a2a14843)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got 6cc72f52)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got 6cc72f52)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc,
got )

# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e
$v /dev/sdb2; done
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got 6b3a399e)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got b3afa73a)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got )
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got 6b3a399e)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got b3afa73a)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc,
got b3afa73a)


Funny enough metadata is still here, because I had to reassemble my 
old box to recover my files. That's how things look like from

Recovering from Debian Wheezy RAID-1

2015-12-22 Thread Narunas Krasauskas
I have this HDD (WD3200BPVT) which used to be part of the RAID-1 array
which has been created with Debian (Wheezy) installer, then AES encrypted,
then split into LVM volumes.


Personalities : [raid1]
md1 : active raid1 sda2[0] sdb2[1]
  311462720 blocks super 1.2 [2/2] [UU]

md0 : active raid1 sda1[0] sdb1[1]
  975296 blocks super 1.2 [2/2] [UU]



- sdb1 (1GB real size) here is a member of the system boot partition raid
(md0)
- sdb2 (~300 GB) was dedicated to everything else, hence encrypted, split
into LVM volumes (md1).

Currently I'm on Archlinux, having one of the RAID-1 array disks connected
via USB adapter.

# uname -a
Linux agn-arch 4.2.5-1-ARCH #1 SMP PREEMPT Tue Oct 27 08:13:28 CET 2015
x86_64 GNU/Linux



My first problem is that Debian installer somehow shuffled partition table
in the way, that my current system cannot recognize correct partition
sizes. Here I find 2.3 TB partition on the 320 GB HDD. How or why I'm
hoping you debian users can tell me.

# fdisk -l /dev/sdb
Disk /dev/sdb: 298.1 GiB, 320072933376 bytes, 78142806 sectors
Units: sectors of 1 * 4096 = 4096 bytes
Sector size (logical/physical): 4096 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: dos
Disk identifier: 0x0009577b


Device Boot   Start   End   Sectors  Size Id Type
/dev/sdb1  *   2048   1953791   1951744  7.5G fd Linux raid autodetect
/dev/sdb2   1953792 625141759 623187968  2.3T fd Linux raid autodetect


Nevertheless it seems that kernel sees more or less correct partition sizes:

# cat /proc/partitions | grep sdb
   8   16  312571224 sdb
   8   17  7806976 sdb1
   8   18  304756056 sdb2



My second problem - mdadm cannot detect md super block:

# mdadm -V
mdadm - v3.3.4 - 3rd August 2015


# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e $v
/dev/sdb; done
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
009063eb)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
7208ec45)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
009063eb)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
7208ec45)
mdadm: No super block found on /dev/sdb (Expected magic a92b4efc, got
7208ec45)
mdadm: Cannot read anchor block on /dev/sdb: Invalid argument
mdadm: /dev/sdb is not attached to Intel(R) RAID controller.
mdadm: Cannot read anchor block on /dev/sdb: Invalid argument
mdadm: Failed to load all information sections on /dev/sdb

# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e $v
/dev/sdb1; done
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
a2a14843)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
a2a14843)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
6cc72f52)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
6cc72f52)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb1 (Expected magic a92b4efc, got
)

# for v in 0 0.90 1 1.0 1.1 1.2 default ddf imsm; do mdadm -E -e $v
/dev/sdb2; done
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
6b3a399e)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
b3afa73a)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
6b3a399e)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
b3afa73a)
mdadm: No super block found on /dev/sdb2 (Expected magic a92b4efc, got
b3afa73a)


Funny enough metadata is still here, because I had to reassemble my old box
to recover my files. That's how things look like from the booted Wheezy:

/dev/sdb1:
  Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
 Array UUID : 2bb13374:a6ecf587:e36a71f4:1f5423f8
   Name : debox:0  (local to host debox)
  Creation Time : Sat May 31 20:25:34 2014
 Raid Level : raid1
   Raid Devices : 2

 Avail Dev Size : 1950720 (952.66 MiB 998.77 MB)
 Array Size : 975296 (952.60 MiB 998.70 MB)
  Used Dev Size

Re: RAID 1 System Installation Question

2015-09-20 Thread Tim McDonough

Thank you! I will try this procedure this week.

Tim

On 9/18/2015 5:04 PM, linuxthefish wrote:

Tim,

From what I remember it's best to set it up when you are installing the
system, then you can install the bootloader to /boot in RAID 1.

https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
is what I followed.

Thanks,
Edmund

On 18 September 2015 at 22:11, Tim McDonough <tmcdono...@gmail.com> wrote:

I've used Debian Linux for a number of years but up until now always with a
single hard drive.

I want to build a new system that will have a pair of 1TB drives configured
as a RAID-1 mirror. In reading the mdadm Wiki the discussion begins with
installing mdadm.

My goal is to have a system where if either drive fails things will a)
continue to run from a single drive, and b) be able to replace the failed
drive with a new one of the same size and have the system rebuild into a
mirrored array again.

It is not clear to me how I need to begin the installation sequence.

My question: Do I install Debian to a single drive and will installing mdadm
then allow me to add the second disk and setup RAID? Do I need to configure
each drive in some way before installing Debian and then mdadm?

If there is an up-to-date "how to" that describes this please just point me
there. I have not found anything that seems to start at the point where I
just have bare metal.

Regards,

Tim







RAID 1 System Installation Question

2015-09-18 Thread Tim McDonough
I've used Debian Linux for a number of years but up until now always 
with a single hard drive.


I want to build a new system that will have a pair of 1TB drives 
configured as a RAID-1 mirror. In reading the mdadm Wiki the discussion 
begins with installing mdadm.


My goal is to have a system where if either drive fails things will a) 
continue to run from a single drive, and b) be able to replace the 
failed drive with a new one of the same size and have the system rebuild 
into a mirrored array again.


It is not clear to me how I need to begin the installation sequence.

My question: Do I install Debian to a single drive and will installing 
mdadm then allow me to add the second disk and setup RAID? Do I need to 
configure each drive in some way before installing Debian and then mdadm?


If there is an up-to-date "how to" that describes this please just point 
me there. I have not found anything that seems to start at the point 
where I just have bare metal.


Regards,

Tim



Re: RAID 1 System Installation Question

2015-09-18 Thread linuxthefish
Tim,

From what I remember it's best to set it up when you are installing the
system, then you can install the bootloader to /boot in RAID 1.

https://blog.sleeplessbeastie.eu/2013/10/04/how-to-configure-software-raid1-during-installation-process/
is what I followed.
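
Whichever way the array is set up, it is worth putting GRUB on both
disks so the machine can still boot if either one dies. A rough sketch
(disk names are only examples):

# grub-install /dev/sda
# grub-install /dev/sdb
# update-grub

With /boot on RAID 1, each disk then carries both the boot loader and
its own mirrored copy of /boot.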

Thanks,
Edmund

On 18 September 2015 at 22:11, Tim McDonough <tmcdono...@gmail.com> wrote:
> I've used Debian Linux for a number of years but up until now always with a
> single hard drive.
>
> I want to build a new system that will have a pair of 1TB drives configured
> as a RAID-1 mirror. In reading the mdadm Wiki the discussion begins with
> installing mdadm.
>
> My goal is to have a system where if either drive fails things will a)
> continue to run from a single drive, and b) be able to replace the failed
> drive with a new one of the same size and have the system rebuild into a
> mirrored array again.
>
> It is not clear to me how I need to begin the installation sequence.
>
> My question: Do I install Debian to a single drive and will installing mdadm
> then allow me to add the second disk and setup RAID? Do I need to configure
> each drive in some way before installing Debian and then mdadm?
>
> If there is an up-to-date "how to" that describes this please just point me
> there. I have not found anything that seems to start at the point where I
> just have bare metal.
>
> Regards,
>
> Tim
>



Re: RAID 1 architecture Partition

2015-08-13 Thread Pascal Hambourg
Clément Breuil wrote:
 I have just realized that this is specific to the jessie installer

I had tested with the Jessie installer.

 so you put the two disks in raid
 
 you create the raid1 multidisk on the two disks
 
 then you do guided partitioning, then
 Use entire disk, and you choose the freshly created RAID device

Ah, there we go. I had chosen manual partitioning. I never use guided
partitioning because it never lets me reach the result I want. The one
time I trusted it, I regretted it afterwards (/boot and /usr too
small, in particular).



Re: RAID 1 architecture Partition

2015-08-13 Thread Pascal Hambourg
Sylvain L. Sauvage wrote:
 On Wednesday 12 August 2015, 14:36:53 Pascal Hambourg wrote:
 I tried, but I see no way to create partitions in a freshly
 created RAID array with the installer's interface.
 
 What if you create your RAID from the disks (/dev/sdX) rather than
 from partitions (/dev/sdX1)?

I do not see what that would change with respect to the /dev/mdX
device that gets created.



Re: RAID 1 architecture Partition

2015-08-13 Thread Sylvain L. Sauvage
On Thursday 13 August 2015, 10:15:41 Pascal Hambourg wrote:
 Sylvain L. Sauvage wrote:
[…]
  What if you create your RAID from the disks (/dev/sdX) rather
  than from partitions (/dev/sdX1)?
 
 I do not see what that would change with respect to the
 /dev/mdX device that gets created.

  You lack imagination ;o)
   It is not the device that changes, it is the overall state that
changes: the installer could tell itself "there are no partitions at
all" and therefore allow creating some inside the RAID, or, put the
other way, in the other case, "there are partitions; if the user has
made RAID inside them, I am not going to confuse them with the option
of creating partitions inside the RAID".
  Recursion is fun, but it gets confusing.

-- 
 Sylvain Sauvage



Re: RAID 1 architecture Partition

2015-08-12 Thread Pascal Hambourg
Christophe wrote:
 
 On 08/08/2015 14:41, Clément Breuil wrote:
 the md partitions are named md0p1 for / and md0p2 for swap
 
 I did not know there could be mdXpY devices...

Partitionable RAID arrays have existed since kernel 2.6.28, so for
quite a while. But this feature seems little used, because LVM was
already being used to make up for that gap, and it remains more
flexible than classic partitioning.

 Can this kind of setup be done from the debian installer?

I do not think so, as I have never seen an option to create a
partition table on a RAID array. Unless you use the command-line tools
available in the installer shell (cfdisk, fdisk, parted).
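
For the record, a rough sketch of what that would look like from such
a shell (disk names are examples, and this wipes both disks):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
# parted /dev/md0 mklabel msdos
# parted /dev/md0 mkpart primary ext4 1MiB 90%
# parted /dev/md0 mkpart primary linux-swap 90% 100%

The kernel then exposes the partitions as /dev/md0p1 and /dev/md0p2,
the naming mentioned at the start of the thread.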



Re: RAID 1 architecture Partition

2015-08-12 Thread Pascal Hambourg
Clément Breuil wrote:
 Yes, it is possible from the debian installation
 
 Creation of a multidisk raid, then 2 partitions inside it

How did you proceed, in detail?
I tried, but I see no way to create partitions in a freshly created
RAID array with the installer's interface. That option is only
available if the RAID array has already been partitioned with another
tool.



Re: RAID 1 architecture Partition

2015-08-12 Thread Sylvain L. Sauvage
On Wednesday 12 August 2015, 14:36:53 Pascal Hambourg wrote:
[…]
 How did you proceed, in detail?
 I tried, but I see no way to create partitions in a freshly
 created RAID array with the installer's interface.

What if you create your RAID from the disks (/dev/sdX) rather than
from partitions (/dev/sdX1)?

-- 
 Sylvain Sauvage


