Re: [gentoo-user] Raid web page

2021-10-01 Thread antlists

On 01/10/2021 22:21, mad.scientist.at.la...@tutanota.com wrote:

Where is Wol's raid page?  I'm about to build a raid box for NAS.


https://raid.wiki.kernel.org/index.php/Linux_Raid

Cheers,
Wol



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-07 Thread Stefan G. Weichinger
On 07.10.20 at 10:40, Stefan G. Weichinger wrote:
> On 06.10.20 at 15:08, k...@aspodata.se wrote:
>> Stefan G. Weichinger:
>>> I know the model: ICP5165BR
>>
>> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
>>
>> says drives up to 8 TB are supported using firmware v5.2.0 Build 17343 **
>>
>> ** Firmware v5.2.0 Build 17343 for the ICP5045BL, ICP5085BL, ICP5805BL,
>>ICP5125BR, and ICP5165BR: Adaptec is providing minimally tested
>>firmware packages. Please contact Adaptec by PMC Technical Support
>>to obtain these firmware files. Have the TSID or serial number of
>>the product at hand when contacting support.
> 
> Yes, I saw that as well.
> 
> I managed to flash the "stable" firmware via arcconf, so I now have
> 15753 ...
> 
> I submitted a ticket there.

They only offer paid support.

I think I've tried enough. Waiting for the smaller drive now.

Thanks for your help, all.




Re: [gentoo-user] RAID: new drive on aac raid

2020-10-07 Thread Stefan G. Weichinger
On 06.10.20 at 15:08, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> I know the model: ICP5165BR
> 
> https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater
> 
> says drives up to 8 TB are supported using firmware v5.2.0 Build 17343 **
> 
> ** Firmware v5.2.0 Build 17343 for the ICP5045BL, ICP5085BL, ICP5805BL,
>ICP5125BR, and ICP5165BR: Adaptec is providing minimally tested
>firmware packages. Please contact Adaptec by PMC Technical Support
>to obtain these firmware files. Have the TSID or serial number of
>the product at hand when contacting support.

Yes, I saw that as well.

I managed to flash the "stable" firmware via arcconf, so I now have
15753 ...

I submitted a ticket there.

In the meantime we already ordered a smaller drive as well.




Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread karl
Stefan G. Weichinger:
> On 06.10.20 at 11:52, k...@aspodata.se wrote:
> > Stefan G. Weichinger:
> >> On 05.10.20 at 21:32, k...@aspodata.se wrote:
> > ...
> >> What do you think, is 2 TB maybe too big for the controller?
> > 
>  0a:0e.0 RAID bus controller: Adaptec AAC-RAID
> > 
> > This doesn't really tell us which controller it is; try with
> > 
> >  lspci -s 0a:0e.0 -nn
> 
> I know the model: ICP5165BR

https://ask.adaptec.com/app/answers/detail/a_id/17414/~/support-for-sata-and-sas-disk-drives-with-a-size-of-2tb-or-greater

says drives up to 8 TB are supported using firmware v5.2.0 Build 17343 **

** Firmware v5.2.0 Build 17343 for the ICP5045BL, ICP5085BL, ICP5805BL,
   ICP5125BR, and ICP5165BR: Adaptec is providing minimally tested
   firmware packages. Please contact Adaptec by PMC Technical Support
   to obtain these firmware files. Have the TSID or serial number of
   the product at hand when contacting support.

Regards,
/Karl Hammar




Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread antlists

On 05/10/2020 17:01, Stefan G. Weichinger wrote:

On 05.10.20 at 17:19, Stefan G. Weichinger wrote:


So my issue seems to be: non-working arcconf doesn't let me "enable"
that one drive.


Some kind of progress.

Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.

This lets me view the physical device(s), but the new disk is marked as
"Failed".

Does it think the disk is a negative size? I looked, your Tosh is 2TB, 
and the other I looked at was 700GB. The raid website says a lot of 
older controllers can't cope with 2TB or larger disks ...


Actually, the device information seems to confirm that - Total Size 0 MB ???


# ./arcconf GETCONFIG 1 PD  | more
Controllers found: 1
--
Physical Device information
--
   Device #0
  Device is a Hard drive
  State  : Failed
  Block Size : Unknown
  Supported  : Yes
  Transfer Speed : Failed
  Reported Channel,Device(T:L)   : 0,0(0:0)
  Reported Location  : Connector 0, Device 0
  Vendor : TOSHIBA
  Model  : MG04SCA20EE
  Firmware   : 0104
  Serial number  : 30A0A00UFX2B
  World-wide name: 539A08327484
  Total Size : 0 MB
  Write Cache: Unknown
  FRU: None
  S.M.A.R.T. : No
  S.M.A.R.T. warnings: 0
  SSD: No






Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread Stefan G. Weichinger
On 06.10.20 at 11:52, k...@aspodata.se wrote:

> Some guesses:
> 
>  https://wiki.debian.org/LinuxRaidForAdmins#aacraid
>  says that it requires libstdc++5
> 
>  arcconf might fork and exec; one could run it under strace to
>  see what happens
> 
>  one could, if the old suse dist. is still available in a subdir, chroot
>  into that subdir and try arcconf from there

Hmm, yes.

Currently I think it's the ancient firmware ... and maybe arcconf also
crashes when it doesn't find some minimum firmware version.

As mentioned in my other reply, I'm waiting for that formatting to finish,
and then I want to try a firmware upgrade (to brand new 2008 ;-) ).



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread Stefan G. Weichinger
On 06.10.20 at 11:52, k...@aspodata.se wrote:
> Stefan G. Weichinger:
>> On 05.10.20 at 21:32, k...@aspodata.se wrote:
> ...
>> What do you think, is 2 TB maybe too big for the controller?
> 
 0a:0e.0 RAID bus controller: Adaptec AAC-RAID
> 
> This doesn't really tell us which controller it is; try with
> 
>  lspci -s 0a:0e.0 -nn

I know the model: ICP5165BR

with ancient firmware.

Currently I am in the controller's BIOS, or whatever you call that. I am
trying to initialize and/or format the drive to make it available.

The format has already been running for hours.

I prepared a FreeDOS iso with the latest firmware to flash ... after the
formatting is done.

Maybe then the compatibility is better. Or the controller becomes a
paperweight.

I have the impression that the controller hasn't yet fully recognized
the disk; it is displayed differently in the array and disk menus of the
firmware UI.

>>> What does sg_verify /dev/sg11 return?
>> nothing
> 
> Well, you have to check the return status: echo $?

Maybe later; right now it's not possible (as mentioned above).



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread karl
Stefan G. Weichinger:
> On 05.10.20 at 16:38, k...@aspodata.se wrote:
...
> But no luck with any version of arcconf so far. Unpacked several zips,
> tried 2 releases, 32-bit and 64-bit ... all crash.
> 
> > Just a poke in the dark, does ldd report all libs found, as in:
> > $ ldd /bin/ls
> > linux-vdso.so.1 (0x7ffcbab4c000)
> > libc.so.6 => /lib64/libc.so.6 (0x7fece3ad5000)
> > /lib64/ld-linux-x86-64.so.2 (0x7fece3d1c000)
> > $
> 
> Yeah, that works.

Some guesses:

 https://wiki.debian.org/LinuxRaidForAdmins#aacraid
 says that it requires libstdc++5

 arcconf might fork and exec; one could run it under strace to
 see what happens

 one could, if the old suse dist. is still available in a subdir, chroot
 into that subdir and try arcconf from there (see the sketch below)
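
A rough sketch of what those two checks could look like in practice; the
trace file name and the /suse-old chroot directory are placeholders, not
paths from this thread:

 # trace the crashing binary, following any forked children
 strace -f -o /tmp/arcconf.trace ./arcconf GETCONFIG 1 PD
 less /tmp/arcconf.trace        # look for failed open()/execve() calls near the end

 # or try the binary from the old SuSE userland, if it is still on disk
 chroot /suse-old /bin/bash
 ldd ./arcconf                  # inside the chroot: check that libstdc++.so.5 resolves
 ./arcconf GETCONFIG 1 PD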

Regards,
/Karl Hammar




Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread karl
Stefan G. Weichinger:
> On 05.10.20 at 21:32, k...@aspodata.se wrote:
...
> What do you think, is 2 TB maybe too big for the controller?

>>> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID

This doesn't really tell us which controller it is; try with

 lspci -s 0a:0e.0 -nn

In the kernel source one can then look at drivers/scsi/aacraid/linit.h,
and since lsscsi says ICP, my guess is that it is one of these below.
When we know which one, one can check for known issues (a lookup sketch
follows the tables below).

/*
 * Because of the way Linux names scsi devices, the order in this table has
 * become important.  Check for on-board Raid first, add-in cards second.
 *
 * Note: The last field is used to index into aac_drivers below.
 */
static const struct pci_device_id aac_pci_tbl[] = {
...
{ 0x9005, 0x0286, 0x9005, 0x029e, 0, 0, 25 }, /* ICP9024RO (Lancer) */
{ 0x9005, 0x0286, 0x9005, 0x029f, 0, 0, 26 }, /* ICP9014RO (Lancer) */
{ 0x9005, 0x0286, 0x9005, 0x02a0, 0, 0, 27 }, /* ICP9047MA (Lancer) */
{ 0x9005, 0x0286, 0x9005, 0x02a1, 0, 0, 28 }, /* ICP9087MA (Lancer) */
{ 0x9005, 0x0286, 0x9005, 0x02a3, 0, 0, 29 }, /* ICP5445AU (Hurricane44) */
{ 0x9005, 0x0285, 0x9005, 0x02a4, 0, 0, 30 }, /* ICP9085LI (Marauder-X) */
{ 0x9005, 0x0285, 0x9005, 0x02a5, 0, 0, 31 }, /* ICP5085BR (Marauder-E) */
{ 0x9005, 0x0286, 0x9005, 0x02a6, 0, 0, 32 }, /* ICP9067MA (Intruder-6) */
...
};
MODULE_DEVICE_TABLE(pci, aac_pci_tbl);

/*
 * dmb - For now we add the number of channels to this structure.
 * In the future we should add a fib that reports the number of channels
 * for the card.  At that time we can remove the channels from here
 */
static struct aac_driver_ident aac_drivers[] = {
...
{ aac_rkt_init, "aacraid",  "ICP ", "ICP9024RO   ", 2 }, /* ICP9024RO (Lancer) */
{ aac_rkt_init, "aacraid",  "ICP ", "ICP9014RO   ", 1 }, /* ICP9014RO (Lancer) */
{ aac_rkt_init, "aacraid",  "ICP ", "ICP9047MA   ", 1 }, /* ICP9047MA (Lancer) */
{ aac_rkt_init, "aacraid",  "ICP ", "ICP9087MA   ", 1 }, /* ICP9087MA (Lancer) */
{ aac_rkt_init, "aacraid",  "ICP ", "ICP5445AU   ", 1 }, /* ICP5445AU (Hurricane44) */
{ aac_rx_init, "aacraid",  "ICP ", "ICP9085LI   ", 1 }, /* ICP9085LI (Marauder-X) */
{ aac_rx_init, "aacraid",  "ICP ", "ICP5085BR   ", 1 }, /* ICP5085BR (Marauder-E) */
{ aac_rkt_init, "aacraid",  "ICP ", "ICP9067MA   ", 1 }, /* ICP9067MA (Intruder-6) */
...
};
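
As a hedged sketch of that lookup: lspci -nn prints the PCI [vendor:device]
IDs and -vnn adds the subsystem IDs, which can then be matched against the
table above. The ID 0x02a5 is only an example, and the kernel tree is
assumed to sit under /usr/src/linux:

 # show the PCI vendor:device IDs and the subsystem IDs
 lspci -s 0a:0e.0 -nn
 lspci -s 0a:0e.0 -vnn | grep -i subsystem

 # then match those IDs against the aacraid driver tables
 grep -n "0x02a5" /usr/src/linux/drivers/scsi/aacraid/linit.*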

> > What does sg_verify /dev/sg11 return?
> nothing

Well, you have to check the return status: echo $?

> > Can you do sg_dd if=foo of=/dev/sg11 count=10 and get it back with
> > sg_dd if=/dev/sg11 of=bar count=10, with cmp foo bar; echo $? 
> > returning 0 ?
> Yes, that works.

Then it seems the drive itself is ok.
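
Put together, the whole check might look roughly like this (assuming foo is
an existing scratch file of at least ten 512-byte blocks, and remembering
that it overwrites the start of the disk, which is only acceptable here
because the drive is new and unconfigured):

 sg_verify /dev/sg11 ; echo $?      # 0 means the SCSI VERIFY command succeeded
 sg_dd if=foo of=/dev/sg11 count=10
 sg_dd if=/dev/sg11 of=bar count=10
 cmp foo bar ; echo $?              # 0 means the blocks came back unchanged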

Regards,
/Karl Hammar




Re: [gentoo-user] RAID: new drive on aac raid

2020-10-06 Thread Stefan G. Weichinger
On 05.10.20 at 21:32, k...@aspodata.se wrote:

> What if you put it on the 53c1030 card? Can you do that, at least to
> verify the disk?

I am 600 km away from that server, and the people I could send to the
basement there aren't very competent in these things. I am afraid that
won't work out well.

I only told them to remove and re-insert the new drive. Maybe some
contact issue.

What do you think, is 2 TB maybe too big for the controller?

> What does sg_verify /dev/sg11 return?

nothing

> Can you do sg_dd if=foo of=/dev/sg11 count=10 and get it back with
> sg_dd if=/dev/sg11 of=bar count=10, with cmp foo bar; echo $? 
> returning 0 ?

Yes, that works.



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread karl
Stefan G. Weichinger:
...
> Searched for more and older releases of arcconf, found Version 1.2 that
> doesn't crash here.
> 
> This lets me view the physical device(s), but the new disk is marked as
> "Failed".
...

What if you put it on the 53c1030 card? Can you do that, at least to
verify the disk?

What does sg_verify /dev/sg11 return?

Can you do sg_dd if=foo of=/dev/sg11 count=10 and get it back with
sg_dd if=/dev/sg11 of=bar count=10, with cmp foo bar; echo $? 
returning 0 ?

Regards,
/Karl Hammar





Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread Stefan G. Weichinger
On 05.10.20 at 17:19, Stefan G. Weichinger wrote:

> So my issue seems to be: non-working arcconf doesn't let me "enable"
> that one drive.

Some kind of progress.

Searched for more and older releases of arcconf, found Version 1.2 that
doesn't crash here.

This lets me view the physical device(s), but the new disk is marked as
"Failed".

# ./arcconf GETCONFIG 1 PD  | more
Controllers found: 1
--
Physical Device information
--
  Device #0
 Device is a Hard drive
 State  : Failed
 Block Size : Unknown
 Supported  : Yes
 Transfer Speed : Failed
 Reported Channel,Device(T:L)   : 0,0(0:0)
 Reported Location  : Connector 0, Device 0
 Vendor : TOSHIBA
 Model  : MG04SCA20EE
 Firmware   : 0104
 Serial number  : 30A0A00UFX2B
 World-wide name: 539A08327484
 Total Size : 0 MB
 Write Cache: Unknown
 FRU: None
 S.M.A.R.T. : No
 S.M.A.R.T. warnings: 0
 SSD: No



Tried a rescan and a clearing of status ... no change.

Maybe the disk is too big for that controller.

Creating a LD also fails:

# ./arcconf CREATE 1 LOGICALDRIVE MAX volume 0,0
Controllers found: 1
A selected device is not available for use.

Command aborted.

juno /usr/portage/distfiles/linux_x64 # ./arcconf CREATE 1 LOGICALDRIVE
MAX volume 0 0
Controllers found: 1
A selected device is not available for use.

Command aborted.

... annoying and frustrating
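
For reference, the rescan and status-clearing steps mentioned above would
normally be something along these lines with Adaptec's arcconf; the exact
subcommand syntax differs between arcconf releases, so treat this only as a
sketch:

 ./arcconf RESCAN 1                       # re-scan controller 1 for newly inserted drives
 ./arcconf SETSTATE 1 DEVICE 0 0 RDY      # try to return channel 0, device 0 to the ready state
 ./arcconf GETCONFIG 1 PD | grep -A 3 "Device #0"    # check whether a real size is reported now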



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread Stefan G. Weichinger
On 05.10.20 at 16:57, Rich Freeman wrote:

> If you're doing software RAID or just individual disks, then you're
> probably going to go into the controller and basically configure that
> disk as standalone, or as a 1-disk "RAID".  That will make it appear
> to the OS, and then you can do whatever you want with it at the OS
> level (stick a filesystem on it, put it in a RAID/lvm, whatever).
> 
> I find this sort of thing really annoying.

Same here! ;-)

> I prefer HBAs that just do
> IT mode or equivalent - acting as a dumb HBA and passing all the
> drives through to the OS.  It isn't that it doesn't work - it is just
> that you're now married to that HBA card vendor and if anything
> happens to the card you have to replace it with something compatible
> and reconfigure it using their software/etc, or else all your data is
> unreadable.  Even if you have backups it isn't something you want to
> just have to deal with if you're talking about a lot of data.

Yep.

So my issue seems to be: non-working arcconf doesn't let me "enable"
that one drive.

I *might* consider booting up the older Suse OS (still around somewhere as
well) via the flaky old Java KVM and trying things there.

The server is ~600 km away, so my options with live USB sticks etc. are
limited right now.





Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread Stefan G. Weichinger
On 05.10.20 at 16:38, k...@aspodata.se wrote:

>  And these on the aac, since they have the same scsi host, and I guess
>  that scsi ch.0 is for the configured drives and ch.1 for the raw drives:
>> [1:0:1:0]    disk    ICP      SAS2             V1.0  /dev/sda
>> [1:0:2:0]    disk    ICP      Device 2         V1.0  /dev/sdb
>> [1:0:3:0]    disk    ICP      Device 3         V1.0  /dev/sdc
>> [1:0:4:0]    disk    ICP      Device 4         V1.0  /dev/sdd
>> [1:0:5:0]    disk    ICP      Device 5         V1.0  /dev/sde
>> [1:0:6:0]    disk    ICP      Device 6         V1.0  /dev/sdf
>> [1:0:7:0]    disk    ICP      Device 7         V1.0  /dev/sdg
>> [1:0:8:0]    disk    ICP      Device 8         V1.0  /dev/sdh
>> [1:0:9:0]    disk    ICP      Device 9         V1.0  /dev/sdi
>> [1:1:0:0]    disk    TOSHIBA  MG04SCA20EE      0104  

Thanks for your analysis and pointers!

> Perhaps these links will help:
>  
> https://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers/
>  http://updates.aslab.com/doc/disk-controller/aacraid_guide.pdf
>  https://hwraid.le-vert.net/wiki/Adaptec

Somehow.

I get smartctl output for that disk:
# smartctl -d scsi --all /dev/sg11
smartctl 7.0 2018-12-30 r4883 [x86_64-linux-4.14.83-gentoo-smp] (local
build)
Copyright (C) 2002-18, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Vendor:   TOSHIBA
Product:  MG04SCA20EE
Revision: 0104
Compliance:   SPC-4
User Capacity:    2.000.398.934.016 bytes [2,00 TB]
Logical block size:   512 bytes
Physical block size:  4096 bytes
Rotation Rate:7200 rpm
Form Factor:  3.5 inches
Logical Unit id:  0x539a08327485
Serial number:30A0A00UFX2B
Device type:  disk
Transport protocol:   SAS (SPL-3)
Local Time is:Mon Oct  5 18:54:44 2020 CEST
SMART support is: Available - device has SMART capability.
SMART support is: Disabled
Temperature Warning:  Disabled or Not Supported
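
Since the drive reports SMART support as present but disabled, it should be
possible to switch it on from the host as well; a small sketch (harmless,
and it at least gives health data while the controller question is open):

 smartctl -d scsi -s on /dev/sg11      # enable SMART on the SAS drive
 smartctl -d scsi -H -A /dev/sg11      # re-read health status and error counters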

But no luck with any version of arcconf so far. Unpacked several zips,
tried 2 releases, 32-bit and 64-bit ... all crash.

> Just a poke in the dark, does ldd report all libs found, as in:
> $ ldd /bin/ls
> linux-vdso.so.1 (0x7ffcbab4c000)
> libc.so.6 => /lib64/libc.so.6 (0x7fece3ad5000)
> /lib64/ld-linux-x86-64.so.2 (0x7fece3d1c000)
> $

Yeah, that works.



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread Rich Freeman
On Mon, Oct 5, 2020 at 10:38 AM  wrote:
>
> Stefan G. Weichinger:
> > On an older server the customer replaced a SAS drive.
> >
> > I see it as /dev/sg11, but not yet as /dev/sdX; it is not visible in "lsblk"
>
> Perhaps these links will help:
>  
> https://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers/
>  http://updates.aslab.com/doc/disk-controller/aacraid_guide.pdf
>  https://hwraid.le-vert.net/wiki/Adaptec
>

I don't know the details of any of these controllers, but you have the
gist of it.  The RAID controller is abstracting the individual drives
and so the OS doesn't see them.  You need to do at least some of the
configuration through the controller.  That usually requires
vendor-specific software, which is often available for linux, and
which in some cases is packaged for Gentoo.

There are a lot of ways to do something like this.  If you're doing
hardware RAID you'd just replace/etc the disk in the raid (I'm
actually surprised in this case that just swapping the drive in the
same slot didn't already do this), and the hardware RAID will rebuild
it, and the OS doesn't see anything at all.  You might need the
utility, but that is about it.

If you're doing software RAID or just individual disks, then you're
probably going to go into the controller and basically configure that
disk as standalone, or as a 1-disk "RAID".  That will make it appear
to the OS, and then you can do whatever you want with it at the OS
level (stick a filesystem on it, put it in a RAID/lvm, whatever).

I find this sort of thing really annoying.  I prefer HBAs that just do
IT mode or equivalent - acting as a dumb HBA and passing all the
drives through to the OS.  It isn't that it doesn't work - it is just
that you're now married to that HBA card vendor and if anything
happens to the card you have to replace it with something compatible
and reconfigure it using their software/etc, or else all your data is
unreadable.  Even if you have backups it isn't something you want to
just have to deal with if you're talking about a lot of data.

-- 
Rich



Re: [gentoo-user] RAID: new drive on aac raid

2020-10-05 Thread karl
Stefan G. Weichinger:
> On an older server the customer replaced a SAS drive.
> 
> I see it as /dev/sg11, but not yet as /dev/sdX; it is not visible in "lsblk"
...

Not that I think it will help you much, but there is sys-apps/sg3_utils:

# lsscsi 
[0:0:0:0]    disk    ATA      TOSHIBA MG03ACA3  FL1A  /dev/sda
[1:0:0:0]    disk    ATA      TOSHIBA MG03ACA3  FL1A  /dev/sdb
[5:0:0:0]    cd/dvd  ASUS     DRW-24F1ST   a    1.00  /dev/scd0
# sg_map -x -i
/dev/sg0  0 0 0 0  0  /dev/sda  ATA   TOSHIBA MG03ACA3  FL1A
/dev/sg1  1 0 0 0  0  /dev/sdb  ATA   TOSHIBA MG03ACA3  FL1A
/dev/sg2  5 0 0 0  5  /dev/scd0  ASUS  DRW-24F1ST   a1.00
# sg_inq /dev/sg1
standard INQUIRY:
  PQual=0  Device_type=0  RMB=0  LU_CONG=0  version=0x05  [SPC-3]
  [AERC=0]  [TrmTsk=0]  NormACA=0  HiSUP=0  Resp_data_format=2
  SCCS=0  ACC=0  TPGS=0  3PC=0  Protect=0  [BQue=0]
  EncServ=0  MultiP=0  [MChngr=0]  [ACKREQQ=0]  Addr16=0
  [RelAdr=0]  WBus16=0  Sync=0  [Linked=0]  [TranDis=0]  CmdQue=1
  [SPI: Clocking=0x0  QAS=0  IUS=0]
length=96 (0x60)   Peripheral device type: disk
 Vendor identification: ATA 
 Product identification: TOSHIBA MG03ACA3
 Product revision level: FL1A
 Unit serial number:14UAKBPDF

You have these two:
> 07:01.0 SCSI storage controller: LSI Logic / Symbios Logic 53c1030 PCI-X
> 0a:0e.0 RAID bus controller: Adaptec AAC-RAID

 I guess that the tape drive and media changer are on the 53c1030:
> [0:0:1:0]    tape    HP       Ultrium 4-SCSI   B12H  /dev/st0
> [0:0:1:1]    mediumx OVERLAND NEO Series       0510  /dev/sch0

 And these on the aac, since they have the same scsi host, and I guess
 that scsi ch.0 is for the configured drives and ch.1 for the raw drives:
> [1:0:1:0]    disk    ICP      SAS2             V1.0  /dev/sda
> [1:0:2:0]    disk    ICP      Device 2         V1.0  /dev/sdb
> [1:0:3:0]    disk    ICP      Device 3         V1.0  /dev/sdc
> [1:0:4:0]    disk    ICP      Device 4         V1.0  /dev/sdd
> [1:0:5:0]    disk    ICP      Device 5         V1.0  /dev/sde
> [1:0:6:0]    disk    ICP      Device 6         V1.0  /dev/sdf
> [1:0:7:0]    disk    ICP      Device 7         V1.0  /dev/sdg
> [1:0:8:0]    disk    ICP      Device 8         V1.0  /dev/sdh
> [1:0:9:0]    disk    ICP      Device 9         V1.0  /dev/sdi
> [1:1:0:0]    disk    TOSHIBA  MG04SCA20EE      0104  
> [1:1:1:0]    disk    SEAGATE  ST373455SS       0002  -
> [1:1:3:0]    disk    WDC      WD7500AZEX-00RKK 0A80  -
> [1:1:4:0]    disk    WDC      WD7500AZEX-00RKK 0A80  -
> [1:1:5:0]    disk    WDC      WD7500AZEX-00RKK 0A80  -
> [1:1:6:0]    disk    WDC      WD7500AZEX-00BN5 1A01  -
> [1:1:7:0]    disk    WDC      WD7500AZEX-00BN5 1A01  -
> [1:1:8:0]    disk    WDC      WD7500AZEX-00RKK 0A80  -
> [1:1:9:0]    disk    ST375052 8AS              CC44  -
> [1:1:10:0]   disk    WDC      WD7500AZEX-00BN5 1A01  -
> [1:1:11:0]   disk    WDC      WD7500AZEX-00BN5 1A01  -

Perhaps these links will help:
 
https://www.cyberciti.biz/faq/linux-checking-sas-sata-disks-behind-adaptec-raid-controllers/
 http://updates.aslab.com/doc/disk-controller/aacraid_guide.pdf
 https://hwraid.le-vert.net/wiki/Adaptec

Just a poke in the dark, does ldd report all libs found, as in:
$ ldd /bin/ls
linux-vdso.so.1 (0x7ffcbab4c000)
libc.so.6 => /lib64/libc.so.6 (0x7fece3ad5000)
/lib64/ld-linux-x86-64.so.2 (0x7fece3d1c000)
$

Regards,
/Karl Hammar





Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 7:36 PM Grant Taylor
 wrote:
>
> That assumes that there is a boot loader.  There wasn't one with the old
> Slackware boot & root disks.
>

Linux no longer supports direct booting from the MBR.

arch/x86/boot/header.S
bugger_off_msg:
.ascii  "Use a boot loader.\r\n"
.ascii  "\n"
.ascii  "Remove disk and press any key to reboot...\r\n"
.byte   0

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread karl
Peter Humphrey:
...
>  In my case I 
> haven't needed an initramfs so far, and now I see I still don't need one - 
> why 
> add complication? Having set the kernel option to assemble raid devices at 
> boot time, now that /dev/md0 has been created I find it ready to go as soon 
> as 
> I boot up and log in. No jiggery-pokery needed.
> 
> A reminder: this is not the boot device.

Works on a boot device as long as you use the old 0.90-format superblock on
your md devices. You can boot with autodetect, or specify how to
assemble the root raid fs on the command line, like:

# cat /proc/cmdline 
BOOT_IMAGE=18/3 ro root=902 md=2,/dev/sda2,/dev/sdb2 raid=noautodetect
# ls -l /dev/md2 
brw-rw 1 root disk 9, 2 Apr 12  2015 /dev/md2
#
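
For readers unfamiliar with that syntax: md=<md device no.>,<dev0>,<dev1>,...
assembles an array with a 0.90 (persistent) superblock from the listed
devices, raid=noautodetect switches off partition-type autodetection, and
root=902 is the root device given as a hexadecimal major/minor pair
(major 9, minor 2, i.e. /dev/md2). A generic sketch of such a command line:

 md=0,/dev/sda2,/dev/sdb2 raid=noautodetect root=900 ro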

Regards,
/Karl Hammar

---
Aspö Data
Lilla Aspö 148
S-742 94 Östhammar
Sweden





Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 02:17 PM, Neil Bothwick wrote:
AFAIR the initramfs code is built into the kernel, not as an option. The 
reason given for using a cpio archive is that it is simple and available 
in the kernel. The kernel itself has an initramfs built into it which is 
executed automatically, it's just that this initramfs is usually empty. 
So loading an initramfs is trivial for any kernel, and loading anything 
after that is handled by the initramfs.


That may be the case now.

But when I started messing with Linux nearly 20 years ago that was not 
the case.  The kernel and the initramfs / initrd were two distinct 
things.  I remember having to calculate where the kernel stopped on a 
floppy disk so that you could start writing the initramfs / initrd image 
after the kernel.


Or, for fun, modify the flag (bit?) to tell the kernel to prompt to
swap disks for the initramfs / initrd.


Both of which needed to tell the kernel where the initramfs / initrd 
started on the medium.


That was a LONG time ago.  More than a few things have changed since then.

That only leaves loading the initramfs file from disk, which is handled 
by the bootloader along with the kernel file.


That assumes that there is a boot loader.  There wasn't one with the old 
Slackware boot & root disks.




--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Peter Humphrey
On Tuesday, 29 January 2019 20:37:31 GMT Wol's lists wrote:
> On 28/01/2019 16:56, Peter Humphrey wrote:
> > I must be missing something, in spite of following the wiki instructions.
> > Can someone help an old duffer out?
> 
> Gentoo wiki, or kernel raid wiki?

Gentoo wiki.

It's fascinating to see what a hornets' nest I've stirred up. In my case I 
haven't needed an initramfs so far, and now I see I still don't need one - why 
add complication? Having set the kernel option to assemble raid devices at 
boot time, now that /dev/md0 has been created I find it ready to go as soon as 
I boot up and log in. No jiggery-pokery needed.

A reminder: this is not the boot device.
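
For anyone wanting to reproduce this: the in-kernel autodetection Peter
refers to is generally described as needing three things together. A
reminder-style sketch, with partition names purely as examples:

 CONFIG_MD_AUTODETECT=y        # plus the needed CONFIG_MD_RAID* built in, not as modules
 fdisk -l /dev/sda             # member partitions must be type 0xfd (Linux raid autodetect)
 mdadm --create /dev/md0 --metadata=0.90 --level=1 --raid-devices=2 /dev/sda3 /dev/sdb3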

-- 
Regards,
Peter.






Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Neil Bothwick
On Tue, 29 Jan 2019 13:37:43 -0700, Grant Taylor wrote:

> > An initramfs typically loads kernel modules, assuming there are any
> > that need to be loaded.  
> 
> And where is it going to load them from if said kernel doesn't support 
> initrds or loop back devices or the archive or file system type that
> the initramfs is using?

AFAIR the initramfs code is built into the kernel, not as an option. The
reason given for using a cpio archive is that it is simple and available
in the kernel. The kernel itself has an initramfs built into it which is
executed automatically, it's just that this initramfs is usually empty.
So loading an initramfs is trivial for any kernel, and loading anything
after that is handled by the initramfs.

That only leaves loading the initramfs file from disk, which is handled
by the bootloader along with the kernel file.


-- 
Neil Bothwick

TEXAS VIRUS: Makes sure that it's bigger than any other file.




Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Alan Mackenzie
On Tue, Jan 29, 2019 at 20:58:37 +, Wol's lists wrote:
> On 29/01/2019 19:41, Grant Taylor wrote:
> > The kernel /must/ have (at least) the minimum drivers (and dependencies) 
> > to be able to boot strap.  It doesn't matter if it's boot strapping an 
> > initramfs or otherwise.

> > All of these issues about lack of a driver are avoided by having the 
> > driver statically compiled into the kernel.

> I'm not sure to what extent it's true of 64-bit hardware, but one of the 
> big problems with non-module kernels is actually being able to load them 
> into the available ram ... something to do with BG's "640K should be 
> enough for anyone".

Uh?  I've never had problems with my hand-configured kernels fitting
into RAM, regardless of whether it's a 32-bit or 64-bit processor.

A modular kernel is a workaround for binary kernels needing to support
any and all hardware devices.  If you load drivers for all these devices
at the same time, you may well run out of RAM (or have done so in the
relatively recent past).

If you're configuring the kernel for a specific machine, I can't see how
you could run out of RAM, unless there's too little of it to run
GNU/Linux anyway.

> Cheers,
> Wol

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Wol's lists

On 29/01/2019 19:41, Grant Taylor wrote:
The kernel /must/ have (at least) the minimum drivers (and dependencies) 
to be able to boot strap.  It doesn't matter if it's boot strapping an 
initramfs or otherwise.


All of these issues about lack of a driver are avoided by having the 
driver statically compiled into the kernel.


I'm not sure to what extent it's true of 64-bit hardware, but one of the 
big problems with non-module kernels is actually being able to load them 
into the available ram ... something to do with BG's "640K should be 
enough for anyone".


Cheers,
Wol



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 3:37 PM Grant Taylor
 wrote:
>
> On 01/29/2019 01:26 PM, Rich Freeman wrote:
> > Uh, an initramfs typically does not exec a second kernel.  I guess it
> > could, in which case that kernel would need its own initramfs to get
> > around to mounting its root filesystem.  Presumably at some point you'd
> > want to have your system stop kexecing kernels and start actually doing
> > something useful...
>
> Which ever type of initramfs you use, the kernel that you are running
> MUST have support for the minimum number of devices and file systems it
> needs to be able to load other things.

Certainly, which is what I've been saying.  And what you've been saying as well.

> Hence the difference between
> built-in and modular drivers that I'm talking about.

The kernel doesn't care where the driver came from.  If you want to
put root on ext4 then the kernel needs ext4 support.  It doesn't
matter if it is built-into the kernel or in a module stored in the
initramfs.

> And where is it going to load them from if said kernel doesn't support
> initrds or loop back devices or the archive or file system type that the
> initramfs is using?

Why would you use an initramfs with a kernel incapable of using an initramfs?

>
> > Sure, and those are in the kernel that runs the initramfs.
>
> Not if they aren't compiled in.
>

Sure, in that case they would be in modules.  I don't really get your
point here.

> I feel like this (sub)thread has become circular and unproductive.

Clearly...

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 01:26 PM, Rich Freeman wrote:
Uh, an initramfs typically does not exec a second kernel.  I guess it 
could, in which case that kernel would need its own initramfs to get 
around to mounting its root filesystem.  Presumably at some point you'd 
want to have your system stop kexecing kernels and start actually doing 
something useful...


Which ever type of initramfs you use, the kernel that you are running 
MUST have support for the minimum number of devices and file systems it 
needs to be able to load other things.  Hence the difference between 
built-in and modular drivers that I'm talking about.


An initramfs typically loads kernel modules, assuming there are any that 
need to be loaded.


And where is it going to load them from if said kernel doesn't support 
initrds or loop back devices or the archive or file system type that the 
initramfs is using?



Sure, and those are in the kernel that runs the initramfs.


Not if they aren't compiled in.

I feel like this (sub)thread has become circular and unproductive.



--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Wol's lists

On 28/01/2019 16:56, Peter Humphrey wrote:

I must be missing something, in spite of following the wiki instructions. Can
someone help an old duffer out?


Gentoo wiki, or kernel raid wiki?

Cheers,
Wol



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Wol's lists

On 29/01/2019 19:01, Rich Freeman wrote:

It would surely be a bug if the kernel were capable of manipulating RAIDs, but 
not of initialising
and mounting them.



Linus would disagree with you there, and has said as much publicly.
He does not consider initialization to be the responsibility of kernel
space long-term, and prefers that this happen in user-space.

Some of the lvm and mdadm support remains for legacy reasons, but you
probably won't see initialization of newer volume/etc managers
supported directly in the kernel.

Actually, the kernel isn't capable of manipulating raid. The reason you 
need raid 0.9 (or 1.0, actually) is that all the raid metadata is at the 
*end* of the partition, so the system boots off the file-system 
completely ignorant that it's actually a raid. The raid then gets 
assembled and when root is remounted rw, it's the raid version that's 
mounted not a single-disk filesystem.


In other words, if you have sda1 and sdb1 raided together as your root, 
the system will boot off a read-only sda1 before switching to a 
read-write md1.


Cheers,
Wol



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 3:15 PM Grant Taylor
 wrote:
>
> On 01/29/2019 01:08 PM, Rich Freeman wrote:
>
> You seem to be focusing on the second kernel that the initramfs execs.
>

Uh, an initramfs typically does not exec a second kernel.  I guess it
could, in which case that kernel would need its own initramfs to get
around to mounting its root filesystem.  Presumably at some point
you'd want to have your system stop kexecing kernels and start
actually doing something useful...

If an initramfs did kexec a second kernel then that initramfs would
basically be wiped out along with anything the first kernel did.
Unless you're talking about something like Xen a linux kernel
generally takes complete control over the system.

An initramfs typically loads kernel modules, assuming there are any
that need to be loaded.  They're loaded by the kernel that was run by
grub, and they stay around after the new root/init is pivoted.

> The initramfs won't be able to do crap if it doesn't have the device and
> file system drivers necessary for the initramfs kernel & init scripts to
> boot.

Sure, and those are in the kernel that runs the initramfs.

Remember, it is the kernel that runs the initramfs, not the other way
around, though the initramfs might modprobe some modules just as you
might do 5 minutes after booting.  If those drivers are already
built-in to the kernel then there is no need to modprobe them.

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 01:08 PM, Rich Freeman wrote:
Obviously.  Hence the reason I said that it shouldn't matter if the 
module is built in-kernel.


I'm saying it does matter.


I'm not sure why it seems like we're talking past each other here...


You seem to be focusing on the second kernel that the initramfs execs.

I'm talking about the first kernel that the initramfs uses.

The initramfs won't be able to do crap if it doesn't have the device and 
file system drivers necessary for the initramfs kernel & init scripts to 
boot.


Now, it might not support a kernel that doesn't support module loading 
at all, though I'm not sure why not.  If it doesn't I could see why the 
developers wouldn't be bothered to address the use case.


Yet another possible reason to dislike dracut.



--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 2:59 PM Grant Taylor
 wrote:
>
> On 01/29/2019 12:47 PM, Rich Freeman wrote:
> > It couldn't.  Hence the reason I said, "obviously it needs whatever
> > drivers it needs, but I don't see why it would care if they are built
> > -in-kernel vs in-module."
>
> You are missing what I'm saying.
>
> Even the kernel the initramfs uses MUST have support for the file
> systems and devices that the initramfs uses.  You can't load the module
> if you can't get to where the module is or have a place to write it to
> load it.

Obviously.  Hence the reason I said that it shouldn't matter if the
module is built in-kernel.

If your root is on ext4 then the kernel needs an ext4 driver. That
could be built-in, or in a module present in an initramfs, and an
initramfs could support either.  Dracut definitely supports either
config.  Dracut can mount btrfs just fine if btrfs is built in-kernel
and not as a module.

I'm not sure why it seems like we're talking past each other here...

Now, it might not support a kernel that doesn't support module loading
at all, though I'm not sure why not.  If it doesn't I could see why
the developers wouldn't be bothered to address the use case.

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 2:52 PM Grant Taylor
 wrote:
>
> On 01/29/2019 12:33 PM, Rich Freeman wrote:
>
> > However, as soon as you throw so much as a second hard drive in a system
> > that becomes unreliable.
>
> Mounting the root based on UUID (or labels) is *WONDERFUL*.  It makes
> the system MUCH MORE resilient.  Even if a device somehow gets inserted
> before your root device in the /dev/sd* order.

Interesting.  I didn't realize that linux supported PARTUUID natively.
I'll agree that addresses many more use cases.  I was under the
impression that it required an initramfs - maybe that is a recent
change...

>
> > I'm not saying you can't use linux without an initramfs.  I'm just
> > questioning why most normal people would want to.  I bet that 98% of
> > people who use Linux run an initramfs, and there is a reason for that...
>
> I don't doubt your numbers.
>
> But I do question the viability of them.  How many people that are
> running Linux even know that they have an option?

Most of them probably don't even realize they're running Linux.  :)

People design systems with an initramfs because it is more robust and
covers more use cases, including the use cases that don't require an
initramfs.  It also allows a fully modular kernel, which means less
RAM use/etc on distros that use one-size-fits-all kernels (which is
basically all of them but Gentoo).

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 12:47 PM, Rich Freeman wrote:
It couldn't.  Hence the reason I said, "obviously it needs whatever 
drivers it needs, but I don't see why it would care if they are built 
-in-kernel vs in-module."


You are missing what I'm saying.

Even the kernel the initramfs uses MUST have support for the file 
systems and devices that the initramfs uses.  You can't load the module 
if you can't get to where the module is or have a place to write it to 
load it.




--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Wol's lists

On 29/01/2019 16:48, Alan Mackenzie wrote:

Hello, All.

On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:

On 01/29/2019 09:08 AM, Peter Humphrey wrote:

I'd rather not have to create an initramfs if I can avoid it. Would it
be sensible to start the raid volume by putting an mdadm --assemble
command into, say, /etc/local.d/raid.start? The machine doesn't boot
from /dev/md0.



Drive by comment.



I thought there was a kernel option / command line parameter that
enabled the kernel to automatically assemble arrays as it's
initializing.  Would something like that work for you?



I have no idea where that is in the context of what you're working on.


I use mdadm with a RAID-1 pair of SSDs, without an initramfs (YUCK!).
My root partition is on the RAID.

For this, the kernel needs to be able to assemble the drives into the
raid at booting up time, and for that you need version 0.90 metadata.
(Or, at least, you did back in 2017.)


You still do. 0.9 is deprecated and bit-rotting. If  it breaks, nobody 
is going to fix it!!!


My command for building my array was:

 # mdadm --create /dev/md2 --level=1 --raid-devices=2 \
 --metadata=0.90 /dev/nvme0n1p2 /dev/nvme1n1p2.

However, there's another quirk which bit me: something in the Gentoo
installation disk took it upon itself to renumber my /dev/md2 to
/dev/md127.  I raised bug #539162 for this, but it was decided not to
fix it.  (This was back in February 2015.)

This has nothing to do with gentoo - it's mdadm. And as with sdX for
drives, it's explicitly not guaranteed that the number remains
consistent. You're supposed to use names now, e.g. /dev/md/root or
/dev/md/home.
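
A small sketch of that naming scheme (device and partition names are only
examples; the config file is /etc/mdadm.conf on Gentoo, /etc/mdadm/mdadm.conf
on some other distros):

 mdadm --create /dev/md/root --name=root --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
 mdadm --detail --scan >> /etc/mdadm.conf    # pin the name so md127-style renumbering stops mattering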


Cheers,
Wol



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 12:33 PM, Rich Freeman wrote:
If all my boxes could function reliably without an initramfs I probably 
would do it that way.


;-)

However, as soon as you throw so much as a second hard drive in a system 
that becomes unreliable.


I disagree.

I've been reliably booting and running systems with multiple drives for 
almost two decades.  Including various combinations of PATA, SATA, SCSI, 
USB, SAN, drives.


Mounting the root based on UUID (or labels) is *WONDERFUL*.  It makes 
the system MUCH MORE resilient.  Even if a device somehow gets inserted 
before your root device in the /dev/sd* order.
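
To make the distinction in this sub-thread concrete (the values below are
made up): root=PARTUUID= is resolved by the kernel itself, so it works
without an initramfs, whereas root=UUID= refers to the filesystem UUID and
needs userspace (an initramfs) to resolve it:

 root=PARTUUID=12345678-9abc-def0-1234-56789abcdef0 ro    # partition-table GUID, kernel resolves it natively
 root=UUID=0f1e2d3c-4b5a-6978-8796-a5b4c3d2e1f0 ro        # filesystem UUID (from blkid), needs an initramfs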


I'm not saying you can't use linux without an initramfs.  I'm just 
questioning why most normal people would want to.  I bet that 98% of 
people who use Linux run an initramfs, and there is a reason for that...


I don't doubt your numbers.

But I do question the viability of them.  How many people that are 
running Linux even know that they have an option?


I suspect the answer to that question is less extreme than you are 
wanting.  I also suspect that it's more in your favor than I want.  But 
70 / 30 (I pulled those from you know where) is significantly different 
than 98 / 2.


A lot of that is situational.  If you have a kernel without btrfs support, 
and you build btrfs as a module and switch your root filesystem to btrfs, 
then obviously you'll need to rebuild your initramfs since the one you 
have can't do btrfs.  But, most people would just rebuild their initramfs 
anytime they rebuild a kernel just to be safe.  If you added btrfs support 
to the kernel (built-in) then it is more of a toss-up, though in the case 
of btrfs specifically you might still need to regenerate the initramfs 
to add the btrfs userspace tools to it if you didn't already have them 
in /usr when you generated it the first time.


But, if you're running btrfs you're probably forced to use an initramfs 
in any case.


That's one of many reasons that I'm not using btrfs.

In any case, it isn't some kind of automatic thing.  Just as some things 
require rebuilding a kernel, some things require rebuilding an initramfs. 
I just find it simplest to build an initramfs anytime I build a kernel, 
and use the make install naming convention so that grub-mkconfig just 
does its thing automatically.


Simplicity is completely independent of necessity.

You have made a choice.  That's your choice to make.

IMO Dracut is one of the most robust solutions for these sorts of 
situations.  It is highly modular, easy to extend, and it really tries 
hard to respect your existing config in /etc.  In fact, not only does 
it put a copy of fstab in the initramfs to help it find your root, but 
after it mounts the root it checks that version of fstab to see if it 
is different and then remounts things accordingly.


I find it simpler and more robust to remove the initramfs complexity if 
I have no /need/ for it.



If you haven't guessed I'm a bit of a Dracut fan.  :)


And I'm a fan of simplicity.



--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 2:41 PM Grant Taylor
 wrote:
>
> On 01/29/2019 12:01 PM, Rich Freeman wrote:
> >
> > That is news to me.  Obviously it needs whatever drivers it needs, but
> > I don't see why it would care if they are built in-kernel vs in-module.
>
> How is a kernel going to be able to mount the root file system if it
> doesn't have the driver (and can't load) for said root file system type?

It couldn't.  Hence the reason I said, "obviously it needs whatever
drivers it needs, but I don't see why it would care if they are built
-in-kernel vs in-module."

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 12:01 PM, Rich Freeman wrote:
Not sure why you would think this.  It is just a cpio archive of a root 
filesystem that the kernel runs as a generic bootstrap.


IMHO the simple fact that such a thing is used when it is not needed is the ugly part.

This means that your bootstrap for initializing your root and everything 
else can use any userspace tool that exists for linux.


Why would I want to do that if I don't /need/ to do that?

I can use IPs on VXLAN VTEPs to communicate between two hosts in the 
same L2 broadcast domain too.  But why would I want to do that when the 
simple IP address on the Ethernet interface will suffice?


A similar concept lies at the heart of coreboot - using a generic 
kernel/userspace as a firmware bootloader, making it far more flexible.


Coreboot is not part of the operating system.

If you want to talk about the kernel in coreboot taking over the 
kernel's job and removing the boot loader + (2nd) kernel, I'm interested 
in discussing.


An initramfs is basically just a fairly compact linux distro.  It works 
the same as any distro.


IP over the VXLAN VTEP works the same as IP over Ethernet too.

The simple fact that there are two distros (kernel & init scripts) that 
run in succession when there is no /need/ for them is the ugly bit.


Why stop at two distros?  Why not three or four or more?

The kernel runs init, and init does its thing.  By convention that init 
will mount the real root and then exec the init inside, but it doesn't 
have to work that way.  Heck, you can run a system with nothing but an 
initramfs and no other root filesystem.


You can also run a system in the halt run level with everything mounted 
read-only.  I used to run firewalls this way.  Makes them really hard to 
modify without rebooting and altering how they boot.  }:-)


Linus would disagree with you there, and has said as much publicly. 
He does not consider initialization to be the responsibility of kernel 
space long-term, and prefers that this happen in user-space.


~chuckle~  That wouldn't be the first or last time that Linus disagreed 
with someone.


Some of the lvm and mdadm support remains for legacy reasons, but you 
probably won't see initialization of newer volume/etc managers supported 
directly in the kernel.



That is news to me.  Obviously it needs whatever drivers it needs, but 
I don't see why it would care if they are built in-kernel vs in-module.


O.o‽

How is a kernel going to be able to mount the root file system if it 
doesn't have the driver (and can't load) for said root file system type? 
 Or how is it going to extract the initramfs if it doesn't have the 
driver for a ram disk?


The kernel /must/ have (at least) the minimum drivers (and dependencies) 
to be able to boot strap.  It doesn't matter if it's boot strapping an 
initramfs or otherwise.


All of these issues about lack of a driver are avoided by having the 
driver statically compiled into the kernel.


IMHO a kernel and a machine is quite a bit more secure if it doesn't 
support modules.  Almost all of the root kits that I've seen require 
modules.  So the root kits are SIGNIFICANTLY impeded if not completely 
broken if the kernel doesn't support modules.


Sure, if you want to know exactly how it works that is true, but the 
same is true of openrc or any other piece of software on your system.


Yep.

I think it's a LOT easier to administer a system that I understand how 
and why it works.


Dracut is highly modular and phase/hook-driven so it isn't too hard 
to grok.  Most of it is just bash/dash scripts.


That may be the case.  But why even spend time with it if it's not 
actually /needed/.




--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 2:22 PM Grant Taylor
 wrote:
>
> On 01/29/2019 12:04 PM, Rich Freeman wrote:
> > I don't see the value in using a different configuration on a box simply
> > because it happens to work on that particular box.  Dracut is a more
> > generic solution that allows me to keep hosts the same.
>
> And if all the boxes in the fleet can function without an initramfs?
> Then why have it?  Why not apply Occam's Razor & Parsimony and use the
> simpler solution.  Especially if more complex solutions introduce
> additional things that need to be updated.

If all my boxes could function reliably without an initramfs I
probably would do it that way.  However, as soon as you throw so much
as a second hard drive in a system that becomes unreliable.

I'm not saying you can't use linux without an initramfs.  I'm just
questioning why most normal people would want to.  I bet that 98% of
people who use Linux run an initramfs, and there is a reason for
that...

> > Sure, and I wouldn't expect them to require rebuilding your initramfs
> > either.  I was speaking generally.
>
> Modifying things like crypttab and / or adding / removing file systems
> from the kernel that are required for boot have caused me to need to
> rebuild an initramfs in the past.  But that was not necessarily Gentoo,
> so it may not be a fair comparison.

A lot of that is situational.  If you have a kernel without btrfs
support, and you build btrfs as a module and switch your root
filesystem to btrfs, then obviously you'll need to rebuild your
initramfs since the one you have can't do btrfs.  But, most people
would just rebuild their initramfs anytime they rebuild a kernel just
to be safe.  If you added btrfs support to the kernel (built-in) then
it is more of a toss-up, though in the case of btrfs specifically you
might still need to regenerate the initramfs to add the btrfs
userspace tools to it if you didn't already have them in /usr when you
generated it the first time.

But, if you're running btrfs you're probably forced to use an
initramfs in any case.

In any case, it isn't some kind of automatic thing.  Just as some
things require rebuilding a kernel, some things require rebuilding an
initramfs.  I just find it simplest to build an initramfs anytime I
build a kernel, and use the make install naming convention so that
grub-mkconfig just does its thing automatically.
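
Roughly, that workflow looks like this on a Gentoo box (the kernel version
string is just an example):

 make modules_install install            # installs the versioned kernel under /boot
 dracut --force --kver 4.19.27-gentoo    # builds a matching initramfs image for that kernel
 grub-mkconfig -o /boot/grub/grub.cfg    # picks up both the kernel and the initramfs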

IMO Dracut is one of the most robust solutions for these sorts of
situations.  It is highly modular, easy to extend, and it really tries
hard to respect your existing config in /etc.  In fact, not only does
it put a copy of fstab in the initramfs to help it find your root, but
after it mounts the root it checks that version of fstab to see if it
is different and then remounts things accordingly.

If you haven't guessed I'm a bit of a Dracut fan.  :)

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 12:04 PM, Rich Freeman wrote:
I don't see the value in using a different configuration on a box simply 
because it happens to work on that particular box.  Dracut is a more 
generic solution that allows me to keep hosts the same.


And if all the boxes in the fleet can function without an initramfs? 
Then why have it?  Why not apply Occam's Razor & Parsimony and use the 
simpler solution.  Especially if more complex solutions introduce 
additional things that need to be updated.


Kinda sorta.  The kernel boots one distro which then chroots and execs 
another.  The initramfs follows the exact same rules as any other 
userspace rootfs.


There's no "kinda" to it.  Booting one distro (kernel & set of init
scripts), then chrooting and execing a new kernel and then booting
another distro is at least twice as complex as booting a single distro.


This is even more complicated if the first initramfs distro is not 
identical to the main installed distro.  Which is quite likely to be the 
case, or at least a subset of the main installed distro.


IMHO an initramfs is usually, but not always, an unnecessary complication.

Sure, and I wouldn't expect them to require rebuilding your initramfs 
either.  I was speaking generally.


Modifying things like crypttab and / or adding / removing file systems 
from the kernel that are required for boot have caused me to need to 
rebuild an initramfs in the past.  But that was not necessarily Gentoo, 
so it may not be a fair comparison.




--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 1:54 PM Grant Taylor
 wrote:
>
> On 01/29/2019 10:58 AM, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed much.
> > The linux kernel guys generally consider this somewhat deprecated
> > behavior, and prefer that users use an initramfs for this sort of thing.
> > It is exactly the sort of problem an initramfs was created to fix.
>
> I see no reason to use an initramfs (swingroot) if the kernel can do
> what is needed by itself.

Personally I use dracut on boxes with a single ext4 partition...  To
each his own.  I don't see the value in using a different
configuration on a box simply because it happens to work on that
particular box.  Dracut is a more generic solution that allows me to
keep hosts the same.

> > Honestly, I'd just bite the bullet and use dracut if you want your OS
> > on RAID/etc.
>
> You obviously have a different opinion than Alan and I do.

Thank you!  :)

> > It is basically a one-liner at this point to install and a relatively
> > small tweak to your GRUB config (automatic if using mkconfig).
>
> The dracut command may be a one-liner.  But the alteration to the system
> and its boot & mount process is CONSIDERABLY more significant.

Kinda sorta.  The kernel boots one distro which then chroots and execs
another.  The initramfs follows the exact same rules as any other
userspace rootfs.

> > Dracut will respect your mdadm.conf, and just about all your other
> > config info in /etc.  The only gotcha is rebuilding your initramfs if
> > it drastically changes (but, drastically changing your root filesystem
> > is something that requires care anyway).
>
> I can think of some drastic changes to the root file system that would
> not require changing the kernel, boot loader, or command line options.

Sure, and I wouldn't expect them to require rebuilding your initramfs
either.  I was speaking generally.

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 1:39 PM Alan Mackenzie  wrote:
>
> On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> > Can't say I've tried it recently, but I'd be shocked if it changed
> > much.  The linux kernel guys generally consider this somewhat
> > deprecated behavior, and prefer that users use an initramfs for this
> > sort of thing.  It is exactly the sort of problem an initramfs was
> > created to fix.
>
> An initramfs is conceptually so ugly that I view it as a workaround, not
> a fix, to whatever problem it's applied to.

Not sure why you would think this.  It is just a cpio archive of a
root filesystem that the kernel runs as a generic bootstrap.

This means that your bootstrap for initializing your root and
everything else can use any userspace tool that exists for linux.

A similar concept lies at the heart of coreboot - using a generic
kernel/userspace as a firmware bootloader, making it far more flexible.

An initramfs is basically just a fairly compact linux distro.  It
works the same as any distro.  The kernel runs init, and init does its
thing.  By convention that init will mount the real root and then exec
the init inside, but it doesn't have to work that way.  Heck, you can
run a system with nothing but an initramfs and no other root
filesystem.

> It would surely be a bug if the kernel were capable of manipulating RAIDs, 
> but not of initialising
> and mounting them.

Linus would disagree with you there, and has said as much publicly.
He does not consider initialization to be the responsibility of kernel
space long-term, and prefers that this happen in user-space.

Some of the lvm and mdadm support remains for legacy reasons, but you
probably won't see initialization of newer volume/etc managers
supported directly in the kernel.

> > Honestly, I'd just bite the bullet and use dracut if you want your OS
> > on RAID/etc.  It is basically a one-liner at this point to install and
> > a relatively small tweak to your GRUB config (automatic if using
> > mkconfig).  Dracut will respect your mdadm.conf, and just about all
> > your other config info in /etc.  The only gotcha is rebuilding your
> > initramfs if it drastically changes (but, drastically changing your
> > root filesystem is something that requires care anyway).
>
> Well, at the moment my system's not broken, hence doesn't need fixing.
> Last time I looked at Dracut, it would only work in a kernel built with
> modules enabled, ruling out my setup.

That is news to me.  Obviously it needs whatever drivers it needs, but
I don't see why it would care if they are built in-kernel vs
in-module.

> Also, without putting in a LOT of time and study, dracut is a massive,
> opaque mystery.  I've got a pretty good mental picture of how my system
> works, and introducing an initramfs would degrade that picture
> enormously.  That means if any problems happened with the initramfs, I'd
> be faced with many days study to get to grips with it.

Sure, if you want to know exactly how it works that is true, but the
same is true of openrc or any other piece of software on your system.

Dracut is highly modular and phase/hook-driven so it isn't too hard to
grok.  Most of it is just bash/dash scripts.
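
For reference, the "one-liner" workflow mentioned earlier looks roughly like
this on a typical setup (package and file names assumed; adjust for your
kernel version and bootloader layout):

# emerge --ask sys-kernel/dracut
# dracut --kver $(uname -r)                  (writes /boot/initramfs-<version>.img)
# grub-mkconfig -o /boot/grub/grub.cfg       (regenerates the menu with the initrd line)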

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 10:58 AM, Rich Freeman wrote:
Can't say I've tried it recently, but I'd be shocked if it changed much. 
The linux kernel guys generally consider this somewhat deprecated 
behavior, and prefer that users use an initramfs for this sort of thing. 
It is exactly the sort of problem an initramfs was created to fix.


I see no reason to use an initramfs (swingroot) if the kernel can do 
what is needed by itself.


Honestly, I'd just bite the bullet and use dracut if you want your OS 
on RAID/etc.


You obviously have a different opinion than Alan and I do.  I dislike 
using an initramfs (swingroot) without a specific reason to actually 
/need/ one.  As in the kernel is unable to do what is necessary by 
itself and /requires/ assistance of an initramfs (swingroot).  (An 
encrypted root device or iSCSI connected root device comes to mind as a 
legitimate /need/ for an initramfs (swingroot).)


It is basically a one-liner at this point to install and a relatively 
small tweak to your GRUB config (automatic if using mkconfig).


The dracut command may be a one-liner.  But the alteration to the system 
and its boot & mount process is CONSIDERABLY more significant.


Dracut will respect your mdadm.conf, and just about all your other 
config info in /etc.  The only gotcha is rebuilding your initramfs if 
it drastically changes (but, drastically changing your root filesystem 
is something that requires care anyway).


I can think of some drastic changes to the root file system that would 
not require changing the kernel, boot loader, or command line options.


But, if you're not using an initramfs you can get the kernel to 
handle this.  Just don't be surprised when it changes your device name 
or whatever.


There are ways to get around the device naming issue.



--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Alan Mackenzie
Hello, Rich.

On Tue, Jan 29, 2019 at 12:58:38 -0500, Rich Freeman wrote:
> On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie  wrote:

> > On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > > I'd rather not have to create an initramfs if I can avoid it. Would it
> > > > be sensible to start the raid volume by putting an mdadm --assemble
> > > > command into, say, /etc/local.d/raid.start? The machine doesn't boot
> > > > from /dev/md0.


> > For this, the kernel needs to be able to assemble the drives into the
> > raid at booting up time, and for that you need version 0.90 metadata.
> > (Or, at least, you did back in 2017.)


> Can't say I've tried it recently, but I'd be shocked if it changed
> much.  The linux kernel guys generally consider this somewhat
> deprecated behavior, and prefer that users use an initramfs for this
> sort of thing.  It is exactly the sort of problem an initramfs was
> created to fix.

An initramfs is conceptually so ugly that I view it as a workaround, not
a fix, to whatever problem it's applied to.  It would surely be a bug if
the kernel were capable of manipulating RAIDs, but not of initialising
and mounting them.

> Honestly, I'd just bite the bullet and use dracut if you want your OS
> on RAID/etc.  It is basically a one-liner at this point to install and
> a relatively small tweak to your GRUB config (automatic if using
> mkconfig).  Dracut will respect your mdadm.conf, and just about all
> your other config info in /etc.  The only gotcha is rebuilding your
> initramfs if it drastically changes (but, drastically changing your
> root filesystem is something that requires care anyway).

Well, at the moment my system's not broken, hence doesn't need fixing.
Last time I looked at Dracut, it would only work in a kernel built with
modules enabled, ruling out my setup.

Also, without putting in a LOT of time and study, dracut is a massive,
opaque mystery.  I've got a pretty good mental picture of how my system
works, and introducing an initramfs would degrade that picture
enormously.  That means if any problems happened with the initramfs, I'd
be faced with many days study to get to grips with it.

> But, if you're not using an initramfs you can get the kernel to handle
> this.  Just don't be surprised when it changes your device name or
> whatever.

The kernel seems to leave it alone.  Any Gentoo installation CD I've
used has corrupted the setup, changing all the names to /dev/md127,
/dev/md126, ..., leaving the victim PC unbootable.  Hence my root
partition is /dev/md127, despite me originally creating it as something
like /dev/md4.

> -- 
> Rich

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Rich Freeman
On Tue, Jan 29, 2019 at 11:48 AM Alan Mackenzie  wrote:
>
> On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> > On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > > I'd rather not have to create an initramfs if I can avoid it. Would it
> > > be sensible to start the raid volume by putting an mdadm --assemble
> > > command into, say, /etc/local.d/raid.start? The machine doesn't boot
> > > from /dev/md0.
>
>
> For this, the kernel needs to be able to assemble the drives into the
> raid at booting up time, and for that you need version 0.90 metadata.
> (Or, at least, you did back in 2017.)
>

Can't say I've tried it recently, but I'd be shocked if it changed
much.  The linux kernel guys generally consider this somewhat
deprecated behavior, and prefer that users use an initramfs for this
sort of thing.  It is exactly the sort of problem an initramfs was
created to fix.

Honestly, I'd just bite the bullet and use dracut if you want your OS
on RAID/etc.  It is basically a one-liner at this point to install and
a relatively small tweak to your GRUB config (automatic if using
mkconfig).  Dracut will respect your mdadm.conf, and just about all
your other config info in /etc.  The only gotcha is rebuilding your
initramfs if it drastically changes (but, drastically changing your
root filesystem is something that requires care anyway).

But, if you're not using an initramfs you can get the kernel to handle
this.  Just don't be surprised when it changes your device name or
whatever.

-- 
Rich



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 09:48 AM, Alan Mackenzie wrote:
However, there's another quirk which bit me: something in the Gentoo 
installation disk took it upon itself to renumber my /dev/md2 to 
/dev/md127.  I raised bug #539162 for this, but it was decided not to 
fix it.  (This was back in February 2015.)


I tend to treat the /dev/md device names as somewhat fluid, much like I 
do the /dev/sd names.  I prefer to use UUIDs for raw file systems, or LV 
names when using LVM, for this reason.
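
As a sketch of that approach (the UUIDs below are made-up placeholders), the 
array can be referenced by UUID rather than by its md number:

# blkid /dev/md127                                   (filesystem UUID, usable in fstab)
/dev/md127: UUID="0b9c4a5e-..." TYPE="ext4"
# mdadm --detail --scan >> /etc/mdadm/mdadm.conf     (records the array by its own UUID)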


$WORK uses sym-links from persistent names to the actual disk that is 
currently the desired disk.




--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Alan Mackenzie
Hello, All.

On Tue, Jan 29, 2019 at 09:32:19 -0700, Grant Taylor wrote:
> On 01/29/2019 09:08 AM, Peter Humphrey wrote:
> > I'd rather not have to create an initramfs if I can avoid it. Would it 
> > be sensible to start the raid volume by putting an mdadm --assemble 
> > command into, say, /etc/local.d/raid.start? The machine doesn't boot 
> > from /dev/md0.

> Drive by comment.

> I thought there was a kernel option / command line parameter that 
> enabled the kernel to automatically assemble arrays as it's 
> initializing.  Would something like that work for you?

> I have no idea where that is in the context of what you're working on.

I use mdadm with a RAID-1 pair of SSDs, without an initramfs (YUCK!).
My root partition is on the RAID.

For this, the kernel needs to be able to assemble the drives into the
raid at booting up time, and for that you need version 0.90 metadata.
(Or, at least, you did back in 2017.)

My command for building my array was:

# mdadm --create /dev/md2 --level=1 --raid-devices=2 \
--metadata=0.90 /dev/nvme0n1p2 /dev/nvme1n1p2.
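
For completeness, a sketch of the other pieces in-kernel autodetection usually
relies on (generic examples, not details of this particular machine): the
member partitions carry the RAID autodetect type (0xfd on an MBR/MSDOS table,
or the Linux RAID type GUID on GPT), and the md driver plus the RAID-1
personality are built into the kernel rather than as modules:

CONFIG_BLK_DEV_MD=y
CONFIG_MD_AUTODETECT=y
CONFIG_MD_RAID1=y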

However, there's another quirk which bit me: something in the Gentoo
installation disk took it upon itself to renumber my /dev/md2 to
/dev/md127.  I raised bug #539162 for this, but it was decided not to
fix it.  (This was back in February 2015.)

> -- 
> Grant. . . .
> unix || die

-- 
Alan Mackenzie (Nuremberg, Germany).



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Grant Taylor

On 01/29/2019 09:08 AM, Peter Humphrey wrote:
I'd rather not have to create an initramfs if I can avoid it. Would it 
be sensible to start the raid volume by putting an mdadm --assemble 
command into, say, /etc/local.d/raid.start? The machine doesn't boot 
from /dev/md0.


Drive by comment.

I thought there was a kernel option / command line parameter that 
enabled the kernel to automatically assemble arrays as it's 
initializing.  Would something like that work for you?


I have no idea where that is in the context of what you're working on.



--
Grant. . . .
unix || die



Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Mick
On Tuesday, 29 January 2019 16:08:27 GMT Peter Humphrey wrote:
> On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:
> 
> Hello Mick,
> 
> --->8
> 
> > Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> > your kernel?
> 
> Yes, I have, but something else was missing: CONFIG_DM_RAID=y. This is in
> the SCSI section, which I'd overlooked (I hadn't needed it before because
> the main storage is an NVMe drive). After setting that and rebooting, mdadm
> --create is working as expected.

Good!  I had assumed this was already selected.  ;-)


> > You need to update your initramfs after you configure your array, so your
> > kernel knows what to assemble at boot time when it doesn't yet have access
> > to your mdadm.conf.
> 
> I'd rather not have to create an initramfs if I can avoid it. Would it be
> sensible to start the raid volume by putting an mdadm --assemble command
> into, say, /etc/local.d/raid.start? The machine doesn't boot from /dev/md0.

I think yes, as long as the OS filesystem(s) are not on the array, or are not 
mounted until the array has been assembled with the correct mdadm.conf.
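
A minimal sketch of such a script (device names and mount point are made up):

#!/bin/sh
# /etc/local.d/raid.start -- assemble the data array and mount it
mdadm --assemble /dev/md0 /dev/sda2 /dev/sdb2
mount /dev/md0 /mnt/data

(remember to chmod +x it; OpenRC's "local" service runs *.start scripts at the 
end of boot)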

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Peter Humphrey
On Tuesday, 29 January 2019 09:20:46 GMT Mick wrote:

Hello Mick,

--->8

> Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in
> your kernel?

Yes, I have, but something else was missing: CONFIG_DM_RAID=y. This is in the 
SCSI section, which I'd overlooked (I hadn't needed it before because the main 
storage is an NVMe drive). After setting that and rebooting, mdadm --create is 
working as expected.
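
For anyone following along, a quick way to double-check those options on a 
running kernel (assuming CONFIG_IKCONFIG_PROC is enabled; otherwise grep the 
kernel's .config):

# zgrep -E 'CONFIG_MD_RAID1|CONFIG_DM_RAID' /proc/config.gz
CONFIG_MD_RAID1=y
CONFIG_DM_RAID=y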

The wiki needs a small addition. I submitted a bug against it but was told not 
to use bugs for this purpose: I should use the wiki page's discussion page. I 
see more head-scratchery coming on...

--->8

> You need to update your initramfs after you configure your array, so your
> kernel knows what to assemble at boot time when it doesn't yet have access
> to your mdadm.conf.

I'd rather not have to create an initramfs if I can avoid it. Would it be 
sensible to start the raid volume by putting an mdadm --assemble command into, 
say, /etc/local.d/raid.start? The machine doesn't boot from /dev/md0.

-- 
Regards,
Peter.






Re: [gentoo-user] RAID-1 on secondary disks how?

2019-01-29 Thread Mick
Hello Peter,

On Monday, 28 January 2019 16:56:57 GMT Peter Humphrey wrote:
> Hello list,

> When I run "mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda2
> /dev/ sdb2", this is what I get:
> 
> # mdadm --stop /dev/md0
> mdadm: stopped /dev/md0
> # mdadm: /dev/sda2 appears to contain an ext2fs file system
>size=524288000K  mtime=Thu Jan  1 01:00:00 1970
> mdadm: /dev/sdb2 appears to contain an ext2fs file system
>size=524288000K  mtime=Thu Jan  1 01:00:00 1970
> Continue creating array? y
> mdadm: RUN_ARRAY failed: Invalid argument

Do you have CONFIG_MD_RAID1 (or whatever it should be these days) built in 
your kernel?


> If I boot from SysRescCD I also get "device /dev/sda2 exists but is not an
> md device." I also had to "mdadm --stop /dev/md127" since that OS calls the
> first array that.

The SysRescueCD kernel you boot with does not (yet) have access to an 
mdadm.conf and is automatically assembling the RAID array with default 
settings.


> I've tried with a GPT disk header and with an MSDOS one, with similar
> results. Also with /etc/init.d/mdraid not running, with it started on my
> command and with it in the boot runlevel. Each time I changed anything I
> rebooted before trying anything else.
> 
> I must be missing something, in spite of following the wiki instructions.
> Can someone help an old duffer out?

You need to update your initramfs after you configure your array, so your 
kernel knows what to assemble at boot time when it doesn't yet have access to 
your mdadm.conf.

It could help if you examined/posted the contents of your:

cat /etc/mdadm/mdadm.conf
cat /proc/mdstat
dmesg |grep md
mdadm --detail --scan
ls -l /dev/md*

I haven't worked with RAID for some years now, but going from memory the above 
should reveal any discrepancies.

-- 
Regards,
Mick

signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] RAID 1 vs RAID 0 - Read performance

2014-02-24 Thread Jarry

On 24-Feb-14 7:27, Facundo Curti wrote:


n= number of disks

reads:
   raid1: n*2
   raid0: n*2

writes:
   raid1: n
   raid0: n*2

But, in real life, the reads from raid 0 doesn't work at all, because if
you use chunk size from 4k, and you need to read just 2kb (most binary
files, txt files, etc..). the read speed should be just of n.


Definitely not true. Very rarely do you need to read just one small file.
Mostly you need many small files (e.g. compilation) or a few big files
(e.g. a database). I do not know what load you expect, but in my case
raid0 (with SSDs) gave me about twice the r/w speed on a heavily-loaded
virtualization platform with many virtual machines. And not only is the
speed higher, the IOPS are also split across two disks (nearly doubled).

I did some testing with 2xSSD/512GB in raid1, 2xSSD/256GB in raid0 and
3xSSD/256GB in raid5 (I used 840 Pro SSDs with a quite good HW controller,
but I think with mdadm it might be similar). Raid0 was way ahead of the
other two configurations in my case.

Finally I went for 4xSSD/256GB in raid10 as I needed both speed and
redundancy...

Jarry

--
___
This mailbox accepts e-mails only from selected mailing-lists!
Everything else is considered to be spam and therefore deleted.



Re: [gentoo-user] RAID 1 vs RAID 0 - Read performance

2014-02-24 Thread Facundo Curti
Thank you all! :) I finally have it all clear.
I'm going to do raid 10. Anyway, I'm going to do a benchmark before
installing.

Thank you!;)


2014-02-24 14:03 GMT-03:00 Jarry mr.ja...@gmail.com:

 On 24-Feb-14 7:27, Facundo Curti wrote:

  n= number of disks

 reads:
raid1: n*2
raid0: n*2

 writes:
raid1: n
raid0: n*2

 But, in real life, the reads from raid 0 doesn't work at all, because if
 you use chunk size from 4k, and you need to read just 2kb (most binary
 files, txt files, etc..). the read speed should be just of n.


 Definitely not true. Very rarely you need to read just one small file.
 Mostly you need many small files (i.e. compilation) or a few big files
 (i.e. database). I do not know what load you expect, but in my case
 raid0 (with SSD) gave me about twice the r/w speed on heavily-loaded
 virtualization platform with many virtual machines. And not only speed
 is higher, but also IOPS are splitted to two disks (nearly doubled).

 I did some testing with 2xSSD/512GB in raid1, 2xSSD/256GB in raid0 and
 3xSSD/256GB in raid5 (I used 840/pro SSD with quite good HW-controller
 but I think with mdadm it might be similar). Raid0 was way ahead of
 other two configurations in my case.

 Finally I went for 4xSSD/256GB in raid10 as I needed both speed and
 redundancy...

 Jarry

 --
 ___
 This mailbox accepts e-mails only from selected mailing-lists!
 Everything else is considered to be spam and therefore deleted.




Re: [gentoo-user] RAID 1 vs RAID 0 - Read performance

2014-02-23 Thread Kerin Millar

On 24/02/2014 06:27, Facundo Curti wrote:

Hi. It's me again, with a question similar to my previous one.

I want to install RAID on SSD's.

Comparing THEORETICALLY, RAID0 (stripe) vs RAID1 (mirror), the
performance would be something like this:

n= number of disks

reads:
   raid1: n*2
   raid0: n*2

writes:
   raid1: n
   raid0: n*2

But, in real life, reads from raid 0 don't work like that at all, because if
you use a chunk size of 4k and you need to read just 2kb (most binary
files, txt files, etc.), the read speed should be just n.


While the workload does matter, that's not really how it works. Be aware 
that Linux implements read-ahead (defaulting to 128K):-


# blockdev --getra /dev/sda
256

That's enough to populate 32 pages in pagecache, given that PAGESIZE is 
4K on i386/amd64.




On the other hand, I read on the net that the kernel doesn't support
multithreaded reads on raid1, so the read speed will always be just n.
Is that true?


No, it is not true. Read balancing is implemented in RAID-1.



Anyway, my question is: which has the best read speed for day-to-day
use? I'm not asking about reads of large files, just about normal use:
opening firefox, X, regular files, etc.


For casual usage, it shouldn't make any difference.



I can't find a definitive guide. They always talk about theoretical
performance, or about real life but without benchmarks
or reliable data.

Having a RAID0 with SSDs, and following [2] on SSD stripe optimization,
should I get the same speed as a RAID1?


I would highly recommend conducting your own benchmarks. I find sysbench 
to be particularly useful.
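
For example, a simple random-read run with the old sysbench 0.4 fileio syntax
(sizes and runtime are arbitrary; treat this as a sketch and adapt it to your
workload):

# sysbench --test=fileio --file-total-size=8G prepare
# sysbench --test=fileio --file-total-size=8G --file-test-mode=rndrd --max-time=60 run
# sysbench --test=fileio --file-total-size=8G cleanup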





My question is because I'm deciding between 4 disks in raid1, or RAID10
(I want redundancy anyway). And as raid 10 = 1+0, I need to know raid0
performance to make a choice... I don't need write speed, just read.


In Linux, RAID-10 is not really nested because the mirroring and 
striping is fully integrated. If you want the best read performance with 
RAID-10 then the far layout is supposed to be the best [1].


Here is an example of how to choose this layout:

# mdadm -C /dev/md0 -n 4 -l 10 -p f2 /dev/sda /dev/sdb /dev/sdc /dev/sdd

Note, however, that the far layout will exhibit worse performance than 
the near layout if the array is in a degraded state. Also, it 
increases seek time in random/mixed workloads but this should not matter 
if you are using SSDs.


--Kerin

[1] http://neil.brown.name/blog/20040827225440



Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread J. Roeleveld
On Sat, February 22, 2014 06:27, Facundo Curti wrote:
 Hi all. I'm new in the list, this is my third message :)
 First at all, I need to say sorry if my english is not perfect. I speak
 spanish. I post here because gentoo-user-es it's middle dead, and it's a
 great chance to practice my english :) Now, the problem.

First of all, there are plenty of people here who don't have English as a
native language. Usually we manage. :)

 I'm going to get a new PC with a disc SSD 120GB and another HDD of 1TB.
 But in a coming future, I want to add 2 or more disks SSD.

 Mi idea now, is:

 Disk HDD: /dev/sda
 /dev/sda1 26GB
 /dev/sda2 90GB
 /dev/sda3 904GB

 Disk SSD: /dev/sdb
 /dev/sdb1 26GB
 /dev/sdb2 90GB
 /dev/sdb3 4GB

 And use /dev/sdb3 as swap. (I will add more with another SSD in future)
 /dev/sda3 mounted in /home/user/data (to save data unused)

Why put the swap on the SSD?

 And a RAID 1 with:
 md0: sda1+sdb1/
 md1: sda2+sdb2/home

 (sda1 and sda2 will be made with the flag: write-mostly. This is useful
 for
 disks slower).
 In a future, I'm going to add more SSD's on this RAID. My idea is the
 fastest I/O.

 Now. My problem/question is:
 Following the gentoo's
 dochttp://www.gentoo.org/doc/es/gentoo-x86+raid+lvm2-quickinstall.xml,
 it says I need to put the flag --metadata=0.9 on the RAID. My question is
 ¿This will make get off the performance?.

metadata=0.9 might be necessary for the BIOS of your computer to see the
/boot partition. If you use an initramfs, you can use any metadata you
like for the root-partition.

 I only found this document:
 https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-0.90_Superblock_Format.
 This says the difference, but nothing about performance and
 advantages/disadvantages.

 Another question is: does GRUB2 still not support metadata 1.2?

See reply from Canek.

 In case that metadata degrades performance and GRUB2 doesn't support this,
 does anyone know how I can fix this to use metadata 1.2?

 I don't partitioned more, because I saw this unnecessary. I just need to
 separate /home in case I need to format the system. But if I need to
 separate /boot to make it work, I don't have problems doing that.

 But of course, /boot also as RAID...

/boot separate as RAID-1 with metadata=0.9 and you are safe.

 ¿Somebody have any ideas to make it work?

It is similar to what I do, except I don't have SSDs in my desktop.

I have 2 partitions per disk:
1 : /boot (mirrored, raid-1)
2 : LVM (striped, raid-0)
All other partitions (root, /usr, /home, ...) are in the LVM.

I use striping for performance reasons for files I currently work with.
All important data is stored and backed up on a server.

For this, an initramfs is required with support for mdraid and lvm.

 Thank you all. Bytes! ;)

You're welcome and good luck.

Please let us know what the performance is like when using the setup you
are thinking of.

--
Joost




Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread Stroller

On Sat, 22 February 2014, at 5:27 am, Facundo Curti facu.cu...@gmail.com 
wrote:
 ...
 I'm going to get a new PC with a disc SSD 120GB and another HDD of 1TB. But 
 in a coming future, I want to add 2 or more disks SSD.
 
 Mi idea now, is:
 
 Disk HDD: /dev/sda
   /dev/sda1   26GB
   /dev/sda2   90GB
   /dev/sda3   904GB
 
 Disk SSD: /dev/sdb
   /dev/sdb1   26GB
   /dev/sdb2   90GB
   /dev/sdb3   4GB
 
 And use /dev/sdb3 as swap. (I will add more with another SSD in future)
 /dev/sda3 mounted in /home/user/data (to save data unused)
 
 And a RAID 1 with:
 md0: sda1+sdb1/
 md1: sda2+sdb2/home
 
 (sda1 and sda2 will be made with the flag: write-mostly. This is useful for 
 disks slower).
 In a future, I'm going to add more SSD's on this RAID. My idea is the fastest 
 I/O.


I think you're thinking along the right lines, but I'd use something dedicated 
to the job:

• http://bcache.evilpiepirate.org
• https://wiki.archlinux.org/index.php/Bcache
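
For context, a very rough sketch of what a bcache pairing looks like (the
device names are examples, and make-bcache will wipe them):

# make-bcache -B /dev/sda3 -C /dev/sdb3     (sda3 = slow backing HDD, sdb3 = SSD cache)
# mkfs.ext4 /dev/bcache0

The filesystem then lives on /dev/bcache0 and the SSD transparently caches hot data.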

Stroller.



Re: [gentoo-user] RAID 1 install guide?

2014-02-22 Thread Kerin Millar

On 05/09/2013 07:13, J. Roeleveld wrote:

On Thu, September 5, 2013 05:04, James wrote:

Hello,

What would folks recommend as a Gentoo
installation guide for a 2 disk Raid 1
installation? My previous attempts all failed
to trying to follow (integrate info from)
a myriad-malaise of old docs.


I would start with the Raid+LVM Quick install guide:
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml


It seems much of the documentation for such is
deprecated, with large disk, newer file systems
(ZFS vs ext4 vs ?) UUID, GPT mdadm,  etc etc.


Depending on the size of the disk, fdisk or gdisk needs to be used.
Filesystems, in my opinion, matter only for the data intended to be put on.

For raid-management, I use mdadm. (Using the linux kernel software raid)
If you have a REAL hardware raid card, I would recommend using that.
(Cheap and/or onboard raid is generally slower than the software raid
implementation in the kernel, and the added bonus of being able to recover
the raid using any other linux installation helps.)


File system that is best for a Raid 1 workstation?


I use Raid0 (striping) on my workstations with LVM and, mostly, ext4
filesystems. The performance is sufficient for my needs.
All my important data is stored on a NAS with hardware Raid-6, so I don't
care if I loose the data on the workstations.


File system that is best for a Raid 1
(casual usage) web server ?


Whichever filesystem would be best if you don't use Raid.
Raid1 means all data is duplicated, from a performance P.O.V., it is not a
good option. Not sure if distributed reads are implemented yet in the
kernel.


They are. It's perfectly good for performance, provided the array is not so 
large that it runs into a controller/bus bandwidth bottleneck.


--Kerin



Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread Kerin Millar

On 22/02/2014 11:41, J. Roeleveld wrote:

On Sat, February 22, 2014 06:27, Facundo Curti wrote:

Hi all. I'm new in the list, this is my third message :)
First at all, I need to say sorry if my english is not perfect. I speak
spanish. I post here because gentoo-user-es it's middle dead, and it's a
great chance to practice my english :) Now, the problem.


First of all, there are plenty of people here who don't have English as a
native language. Usually we manage. :)


I'm going to get a new PC with a disc SSD 120GB and another HDD of 1TB.
But in a coming future, I want to add 2 or more disks SSD.

Mi idea now, is:

 Disk HDD: /dev/sda
/dev/sda1 26GB
/dev/sda2 90GB
/dev/sda3 904GB

 Disk SSD: /dev/sdb
/dev/sdb1 26GB
/dev/sdb2 90GB
/dev/sdb3 4GB

And use /dev/sdb3 as swap. (I will add more with another SSD in future)
/dev/sda3 mounted in /home/user/data (to save data unused)


Why put the swap on the SSD?


And a RAID 1 with:
md0: sda1+sdb1/
md1: sda2+sdb2/home

(sda1 and sda2 will be made with the flag: write-mostly. This is useful
for
disks slower).
In a future, I'm going to add more SSD's on this RAID. My idea is the
fastest I/O.

Now. My problem/question is:
Following the gentoo's doc
http://www.gentoo.org/doc/es/gentoo-x86+raid+lvm2-quickinstall.xml,
it says I need to put the flag --metadata=0.9 on the RAID. My question is:
will this degrade performance?


It has no impact on performance.



metadata=0.9 might be necessary for the BIOS of your computer to see the
/boot partition. If you use an initramfs, you can use any metadata you
like for the root-partition.


The BIOS should not care at all as it is charged only with loading code 
from the MBR. However, if the intention is to use grub-0.97 then the 
array hosting the filesystem containing /boot should:


  * use RAID-1
  * use the 0.90 superblock format

That way, grub-0.97 can read the filesystem from either block device 
belonging to the array. Doing it any other way requires a bootloader 
that specifically understands md (such as grub2).


There's also the neat trick of installing grub to all disks belonging to 
the array for bootloader redundancy. However, I'm not entirely sure that 
Code Listing 2.35 in the Gentoo doc is correct. Given that particular 
example, I would instead do it like this:-


  grub> device (hd0) /dev/sda
  grub> root (hd0,0)
  grub> setup (hd0)
  grub> device (hd0) /dev/sdb
  grub> root (hd0,0)
  grub> setup (hd0)

The idea there is that, should it ever be necessary to boot from the 
second disk, the disk in question would be the one enumerated first by 
the BIOS (mapping to hd0 in grub). Therefore, grub should be installed 
in that context across all disks. It should not be allowed to boot from 
any given drive and subsequently try to access >=(hd1).


With grub2, it's a little easier because it is only necessary to run 
grub-install on each of the drives:


  # grub-install /dev/sda
  # grub-install /dev/sdb




I only found this document:
https://raid.wiki.kernel.org/index.php/RAID_superblock_formats#The_version-0.90_Superblock_Format.
This says the difference, but nothing about performance and
advantages/disadvantages.


The 0.90 superblock format is subject to specific limitations that are 
clearly described by that page. For example, it is limited to 28 devices 
in an array, with each device being limited to 2TB in size.


Also, the 0.90 format will cause issues in certain setups because of the 
way that it places its metadata at the end of the block device [1]. That 
said, the 0.90 format does allow for the kernel to construct the array 
without any intervention from userspace so it still has its uses.


The 1.2 format positions the superblock 4KiB from the beginning of the 
device. Note that this has nothing at all to do with the data, which 
usually begins 1MiB in. If you run mdadm -E on a member of such an 
array, the offset will be reported as the Data Offset. For example:


  Data Offset : 2048 sectors

So, it's not a matter of alignment. Rather, the advantage of the 1.2 
format is that it leaves a little space for bootloader code e.g. in case 
you want to create an array from whole disks rather than disk partitions.


None of this matters to me so I tend to stick to the 1.1 format. It 
wouldn't actually make any difference to my particular use case.





Another question is: does GRUB2 still not support metadata 1.2?


See reply from Canek.


In case that metadata degrades performance and GRUB2 doesn't support this,
does anyone know how I can fix this to use metadata 1.2?

I don't partitioned more, because I saw this unnecessary. I just need to
separate /home in case I need to format the system. But if I need to
separate /boot to make it work, I don't have problems doing that.

But of course, /boot also as RAID...


/boot separate as RAID-1 with metadata=0.9 and you are safe.


¿Somebody have any ideas to make it work?


It is similar to what I do, except I don't have SSDs in my desktop.

I have 

Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread Facundo Curti
Thank you so much for the help! :) It was very useful.

I just need wait my new PC, and try it *.* jeje.

Bytes! ;)


Re: [gentoo-user] RAID 1 on /boot

2014-02-22 Thread Facundo Curti
Please let us know what the performance is like when using the setup
you are thinking of.

Of course. I will post these here :)


2014-02-22 16:13 GMT-03:00 Facundo Curti facu.cu...@gmail.com:

 Thank you so much for the help! :) It was very useful.

 I just need wait my new PC, and try it *.* jeje.

 Bytes! ;)



Re: [gentoo-user] RAID 1 on /boot

2014-02-21 Thread Canek Peláez Valdés
On Fri, Feb 21, 2014 at 11:27 PM, Facundo Curti facu.cu...@gmail.com wrote:
 Hi all. I'm new in the list, this is my third message :)
 First at all, I need to say sorry if my english is not perfect. I speak
 spanish. I post here because gentoo-user-es it's middle dead, and it's a
 great chance to practice my english :) Now, the problem.

 I'm going to get a new PC with a disc SSD 120GB and another HDD of 1TB. But
 in a coming future, I want to add 2 or more disks SSD.

 Mi idea now, is:

 Disk HDD: /dev/sda
 /dev/sda1 26GB
 /dev/sda2 90GB
 /dev/sda3 904GB

 Disk SSD: /dev/sdb
 /dev/sdb1 26GB
 /dev/sdb2 90GB
 /dev/sdb3 4GB

 And use /dev/sdb3 as swap. (I will add more with another SSD in future)
 /dev/sda3 mounted in /home/user/data (to save data unused)

 And a RAID 1 with:
 md0: sda1+sdb1/
 md1: sda2+sdb2/home

 (sda1 and sda2 will be made with the flag: write-mostly. This is useful for
 disks slower).
 In a future, I'm going to add more SSD's on this RAID. My idea is the
 fastest I/O.

 Now. My problem/question is:
 Following the gentoo's doc, it says I need to put the flag --metadata=0.9 on
 the RAID. My question is: will this degrade performance?

 I only found this document. This says the difference, but nothing about
 performance and advantages/disadvantages.

I don't know the performance differences, if any, but in my tests
everything worked with the default metadata (1.2).

 Another question is: does GRUB2 still not support metadata 1.2?

No, GRUB2 supports it just fine. You just need to use the mdraid1x GRUB2 module.
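
If a particular GRUB2 build doesn't pull it in automatically, the module can
be added to the core image explicitly; a sketch, with /dev/sda as a
placeholder target:

# grub-install --modules=mdraid1x /dev/sda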

 In case that metadata degrades performance and GRUB2 doesn't support this,
 does anyone know how I can fix this to use metadata 1.2?

It's not necessary.

 I don't partitioned more, because I saw this unnecessary. I just need to
 separate /home in case I need to format the system. But if I need to
 separate /boot to make it work, I don't have problems doing that.

It works fine; I tested it in a virtual machine (and I used systemd,
but it should not be significantly different with OpenRC). You can
check my steps in [1] and [2] (with LUKS support).

 But of course, /boot also as RAID...

It works even with boot being on LVM over RAID.

 ¿Somebody have any ideas to make it work?

I didn't test it on a real-life machine (I've never been a fan of
either RAID or LVM), but in a VM it worked without a hitch. Again,
check [1] and [2].

Hope it helps.

Regards.

[1] http://article.gmane.org/gmane.linux.gentoo.user/269586
[2] http://article.gmane.org/gmane.linux.gentoo.user/269628
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: [gentoo-user] RAID 1 on /boot

2014-02-21 Thread Canek Peláez Valdés
On Sat, Feb 22, 2014 at 12:41 AM, Canek Peláez Valdés can...@gmail.com wrote:
[ snip ]
 [1] http://article.gmane.org/gmane.linux.gentoo.user/269586
 [2] http://article.gmane.org/gmane.linux.gentoo.user/269628

Also, check [3], since the solution on [2] was unnecessarily complex.

Regards.

[3] http://comments.gmane.org/gmane.linux.gentoo.user/269628
-- 
Canek Peláez Valdés
Posgrado en Ciencia e Ingeniería de la Computación
Universidad Nacional Autónoma de México



Re: [gentoo-user] RAID help

2013-10-15 Thread Paul Hartman
On Tue, Oct 15, 2013 at 2:34 AM, Mick michaelkintz...@gmail.com wrote:
 Hi All,

 I haven't had to set up a software RAID for years, and now I want to set up
 two RAID 1 arrays on a new file server to serve SMB to MSWindows clients.  The
 first RAID1 having two disks, where a multipartition OS installation will take
 place.  The second RAID1 having two disks for a single data partition.

 From what I recall I used mdadm with --auto=mdp, to create a RAID1 from 2
 disks, before I used fdisk to partition the new /dev/md0 as necessary.  All
 this is lost in the fog of time.  Now I read that these days udev names the
 devices/partitions, so I am not sure what the implication of this is and how
 to proceed.

 What is current practice?  Create multiple /dev/mdXs for the OS partitions I
 would want and then stick a fs on each one, or create one /dev/md0 which
 thereafter is formatted with multiple partitions?  Grateful for any pointers
 to resolve my confusion.

One of the best resources is the kernel RAID wiki:
https://raid.wiki.kernel.org/



Re: [gentoo-user] RAID help

2013-10-15 Thread Mick
On Tuesday 15 Oct 2013 20:28:46 Paul Hartman wrote:
 On Tue, Oct 15, 2013 at 2:34 AM, Mick michaelkintz...@gmail.com wrote:
  Hi All,
  
  I haven't had to set up a software RAID for years, and now I want to set
  up two RAID 1 arrays on a new file server to serve SMB to MSWindows
  clients.  The first RAID1 having two disks, where a multipartition OS
  installation will take place.  The second RAID1 having two disks for a
  single data partition.
  
  From what I recall I used mdadm with --auto=mdp, to create a RAID1 from 2
  disks, before I used fdisk to partition the new /dev/md0 as necessary. 
  All this is lost in the fog of time.  Now I read that these days udev
  names the devices/partitions, so I am not sure what the implication of
  this is and how to proceed.
  
  What is current practice?  Create multiple /dev/mdXs for the OS
  partitions I would want and then stick a fs on each one, or create one
  /dev/md0 which thereafter is formatted with multiple partitions? 
  Grateful for any pointers to resolve my confusion.
 
 One of the best resources is the kernel RAID wiki:
 https://raid.wiki.kernel.org/

Thanks Paul!  It seems that after a cursory look, both ways of partitioning a 
RAID-1 are still available:

https://raid.wiki.kernel.org/index.php/Partitioning_RAID_/_LVM_on_RAID


# df -h
 FilesystemSize  Used Avail Use% Mounted on
 /dev/md2  3.8G  640M  3.0G  18% /
 /dev/md1   97M   11M   81M  12% /boot
 /dev/md5  3.8G  1.1G  2.5G  30% /usr
 /dev/md6  9.6G  8.5G  722M  93% /var/www
 /dev/md7  3.8G  951M  2.7G  26% /var/lib
 /dev/md8  3.8G   38M  3.6G   1% /var/spool
 /dev/md9  1.9G  231M  1.5G  13% /tmp
 /dev/md10 8.7G  329M  7.9G   4% /var/www/html
=

and:

mdadm --create --auto=mdp --verbose /dev/md_d0 --level=mirror --raid-devices=2 
/dev/sda /dev/sdb

which is thereafter partitioned with fdisk.  This is the one I have used in 
the past.


Which one is preferable, or what are the pros & cons of each?

-- 
Regards,
Mick


signature.asc
Description: This is a digitally signed message part.


Re: [gentoo-user] RAID 1 install guide?

2013-09-05 Thread J. Roeleveld
On Thu, September 5, 2013 05:04, James wrote:
 Hello,

 What would folks recommend as a Gentoo
 installation guide for a 2 disk Raid 1
 installation? My previous attempts all failed
 while trying to follow (and integrate info from)
 a myriad-malaise of old docs.

I would start with the Raid+LVM Quick install guide:
http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml

 It seems much of the documentation for such is
 deprecated, with large disk, newer file systems
 (ZFS vs ext4 vs ?) UUID, GPT mdadm,  etc etc.

Depending on the size of the disk, fdisk or gdisk needs to be used.
Filesystems, in my opinion, matter only for the data intended to be put on.

For raid-management, I use mdadm. (Using the linux kernel software raid)
If you have a REAL hardware raid card, I would recommend using that.
(Cheap and/or onboard raid is generally slower than the software raid
implementation in the kernel, and the added bonus of being able to recover
the raid using any other linux installation helps.)

 File system that is best for a Raid 1 workstation?

I use Raid0 (striping) on my workstations with LVM and, mostly, ext4
filesystems. The performance is sufficient for my needs.
All my important data is stored on a NAS with hardware Raid-6, so I don't
care if I lose the data on the workstations.

 File system that is best for a Raid 1
 (casual usage) web server ?

Whichever filesystem would be best if you don't use Raid.
Raid1 means all data is duplicated, from a performance P.O.V., it is not a
good option. Not sure if distributed reads are implemented yet in the
kernel.

 Time for me to try and spank this gator's ass again.
 (not exactly what happened last time).

I implemented this on 2 machines recently. (Using Raid0) and previously
also on an older server (no longer in use) where I used Raid1.
I always used the Gentoo Raid+LVM guide I mentioned above.

If you have any questions while doing this, feel free to ask on this list.

--
Joost




Re: [gentoo-user] RAID 1 install guide?

2013-09-05 Thread Marc Stürmer

Am 05.09.2013 05:04, schrieb James:

Do you want to use a software raid or hardware raid?


File system that is best for a Raid 1 workstation?


Well, of course only file systems supported by the rescue system 
of your hosting provider.



File system that is best for a Raid 1
(casual usage) web server ?


Personally I'd go for a software raid and ext4. If you want snapshots, 
put LVM into that, too.


Here's some documentation how to create a software raid in Linux:

https://raid.wiki.kernel.org/index.php/RAID_setup





Re: [gentoo-user] Raid system fails to boot after moving from 2.6 kernel to 3.5

2012-10-26 Thread Neil Bothwick
On Fri, 26 Oct 2012 10:36:38 +0200, Pau Peris wrote:

 As my HD's are on raid 0 mode i use a custom initrd file in order to be
 able to boot. While kernel 2.6 is able to boot without problems the new
 3.5 compiled kernel fails to boot complaining about no block devices
 found. After taking a look at initrd.cpio contained scripts i can see
 the failure message is given by mdadm tool.

Add set -x to the top of your custom init script to see what's going on.
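
I.e. something like this at the top of the init script inside the initrd (a
sketch; adjust the interpreter line to whatever your initrd actually uses):

#!/bin/sh
set -x    # echo every command so the failing step shows up on the console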


-- 
Neil Bothwick

Why do programmers get Halloween and Christmas confused?
Because oct 31 is the same as dec 25.


signature.asc
Description: PGP signature


Re: [gentoo-user] Raid system fails to boot after moving from 2.6 kernel to 3.5

2012-10-26 Thread J. Roeleveld
Pau Peris sibok1...@gmail.com wrote:

Hi,


i'm running GNU/Gentoo Linux with a custom compiled kernel and i've
just
migrated from a 2.6 kernel to a 3.5.


As my HD's are on raid 0 mode i use a custom initrd file in order to be
able to boot. While kernel 2.6 is able to boot without problems the new
3.5
compiled kernel fails to boot complaining about no block devices
found.
After taking a look at initrd.cpio contained scripts i can see the
failure
message is given by mdadm tool.


Does anyone has a clue about that? Thansk in advanced. :)

Paul,

I had a similar issue with a new system. There it was caused by mdadm trying to 
start the raid devices before all the drives were identified.
Try disabling Asynchronous SCSI scanning in the kernel config. 
(CONFIG_SCSI_SCAN_ASYNC)
Or add scsi_mod.scan=sync to the kernel commandline to see if it's the same 
cause.
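
For example, appended to the kernel line in a grub-legacy grub.conf (the file
names and root device below are placeholders):

kernel /boot/kernel-3.5.x root=/dev/md0 scsi_mod.scan=sync
initrd /boot/initrd.cpio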

--
Joost
-- 
Sent from my Android phone with K-9 Mail. Please excuse my brevity.

Re: [gentoo-user] Raid system fails to boot after moving from 2.6 kernel to 3.5

2012-10-26 Thread Pau Peris
Hi,

thanks a lot for both answers.

I've just checked my kernel config and CONFIG_SCSI_SCAN_ASYNC is not set,
so I'm going to take a look at it all with set -x.

Thanks :)

2012/10/26 J. Roeleveld jo...@antarean.org

 Pau Peris sibok1...@gmail.com wrote:

 Hi,


 i'm running GNU/Gentoo Linux with a custom compiled kernel and i've just
 migrated from a 2.6 kernel to a 3.5.


 As my HD's are on raid 0 mode i use a custom initrd file in order to be
 able to boot. While kernel 2.6 is able to boot without problems the new 3.5
 compiled kernel fails to boot complaining about no block devices found.
 After taking a look at initrd.cpio contained scripts i can see the failure
 message is given by mdadm tool.


 Does anyone has a clue about that? Thansk in advanced. :)


 Paul,

 I had a similar issue with a new system. There it was caused by mdadm
 trying to start the raid devices before all the drives were identified.
 Try disabling Asynchronous SCSI scanning in the kernel config.
 (CONFIG_SCSI_SCAN_ASYNC)
 Or add scsi_mod.scan=sync to the kernel commandline to see if it's the
 same cause.

 --
 Joost
 --
 Sent from my Android phone with K-9 Mail. Please excuse my brevity.




-- 
Pau Peris Rodriguez
http://www.pauperis.com

This e-mail contains confidential information addressed exclusively to its
intended recipient(s). Its disclosure, copying or distribution to third
parties without prior written authorisation from Pau Peris Rodriguez is
therefore prohibited. If you have received this information in error,
please notify the sender immediately at the sender's e-mail address.


Re: [gentoo-user] Raid system fails to boot after moving from 2.6 kernel to 3.5

2012-10-26 Thread Paul Hartman
On Fri, Oct 26, 2012 at 3:36 AM, Pau Peris sibok1...@gmail.com wrote:
 Hi,


 i'm running GNU/Gentoo Linux with a custom compiled kernel and i've just
 migrated from a 2.6 kernel to a 3.5.


 As my HD's are on raid 0 mode i use a custom initrd file in order to be able
 to boot. While kernel 2.6 is able to boot without problems the new 3.5
 compiled kernel fails to boot complaining about no block devices found.
 After taking a look at initrd.cpio contained scripts i can see the failure
 message is given by mdadm tool.


 Does anyone has a clue about that? Thansk in advanced. :)

There is a bug with certain versions of mdadm failing to assemble
arrays; maybe you are using one of the affected versions.
See https://bugs.gentoo.org/show_bug.cgi?id=416081



Re: [gentoo-user] Raid system fails to boot after moving from 2.6 kernel to 3.5

2012-10-26 Thread Pau Peris
Thx a lot Paul,

this morning I noticed there was some kind of issue in my old initrd (which
works fine for 2.6 kernels), so I created a new initrd which works fine and
lets me boot into GNU/Gentoo Linux with the same 3.5 bzImage.

I'm going to check whether the issue came from mdadm, thanks :)

2012/10/26 Paul Hartman paul.hartman+gen...@gmail.com

 On Fri, Oct 26, 2012 at 3:36 AM, Pau Peris sibok1...@gmail.com wrote:
  Hi,
 
 
  i'm running GNU/Gentoo Linux with a custom compiled kernel and i've just
  migrated from a 2.6 kernel to a 3.5.
 
 
  As my HD's are on raid 0 mode i use a custom initrd file in order to be
 able
  to boot. While kernel 2.6 is able to boot without problems the new 3.5
  compiled kernel fails to boot complaining about no block devices found.
  After taking a look at initrd.cpio contained scripts i can see the
 failure
  message is given by mdadm tool.
 
 
  Does anyone has a clue about that? Thansk in advanced. :)

 There is a bug with certain versions of mdadm failing to assemble
 arrays, maybe you using one of the affected versions.
 see https://bugs.gentoo.org/show_bug.cgi?id=416081




-- 
Pau Peris Rodriguez
http://www.pauperis.com

This e-mail contains confidential information addressed exclusively to its
intended recipient(s). Its disclosure, copying or distribution to third
parties without prior written authorisation from Pau Peris Rodriguez is
therefore prohibited. If you have received this information in error,
please notify the sender immediately at the sender's e-mail address.


Re: [gentoo-user] RAID-1 install

2011-07-30 Thread pk
On 2011-07-30 03:04, james wrote:
 Ok so my first issue is the installation media
 and a lack of tools for  GPT (GUID Partition Table).

snip

 the 4k block (GPT) issue? Maybe I missed it 
 on the minimal CD?

If you're after GPT-able partition software you can use (g)parted,
available on the Gentoo live cd (it _should_ handle 4k disks as well):
http://www.gentoo.org/news/20110308-livedvd.xml

HTH

Best regards

Peter K



Re: [gentoo-user] RAID on new install

2011-04-03 Thread Mark Shields
On Thu, Mar 31, 2011 at 2:46 PM, James wirel...@tampabay.rr.com wrote:



 Hello,

 I'm about to install a dual HD (mirrored) gentoo
 software raid system, with BTRFS. Suggestion,
 guides and documents to reference are all welcome.

 I have this link, which is down as the best example:
 http://en.gentoo-wiki.com/wiki/RAID/Software


 Additionally, I have these links for a guide:
 http://www.gentoo.org/doc/en/lvm2.xml
 http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml


 Any other Raid/LVM/BTRFS information I should reference?

 James


 The last guide recommends using raid0 on some partitions; every time I use
LVM2, I use nothing but raid1 partitions.  I'd rather have the full raid1
than partial raid 1 + speed of raid0.


Re: [gentoo-user] RAID on new install

2011-03-31 Thread Mark Knecht
On Thu, Mar 31, 2011 at 12:46 PM, James wirel...@tampabay.rr.com wrote:


 Hello,

 I'm about to install a dual HD (mirrored) gentoo
 software raid system, with BTRFS. Suggestion,
 guides and documents to reference are all welcome.

 I have this link, which is down as the best example:
 http://en.gentoo-wiki.com/wiki/RAID/Software


 Additionally, I have these links for a guide:
 http://www.gentoo.org/doc/en/lvm2.xml
 http://www.gentoo.org/doc/en/gentoo-x86+raid+lvm2-quickinstall.xml


 Any other Raid/LVM/BTRFS information I should reference?

 James

James,
   Depending on what you are putting onto your RAID, watch carefully
what choices you make for superblock type, and be aware of
possible md name changes between the install environment and your
first real boot.

   I cannot comment on BTRFS and whether it's a good thing to use with
LVM. I've seen varying reports, so if this is a learning install then
do what you want and have fun. If it's intended to be an in-service
machine ASAP then possibly look around for more info on that before
starting.

Good luck,
Mark



Re: [gentoo-user] raid autodetection uuid differences

2010-04-17 Thread Volker Armin Hemmann
On Saturday 17 April 2010, David Mehler wrote:
 Hello,
 I've got a new gentoo box with two drives that i'm using raid1 on. On
 boot the md raid autodetection is failing. Here's the error i'm
 getting:
 
 md: Waiting for all devices to be available before autodetect
 md: If you don't use raid, use raid=noautodetect
 md: Autodetecting RAID arrays.
 md: Scanned 4 and added 4 devices.
 md: autorun ...
 md: considering sda3 ...
 md:  adding sda3 ...
 md: sda1 has different UUID to sda3
 md:  adding sdb3 ...
 md: sdb1 has different UUID to sda3
 md: created md3
 md: bind<sdb3>
 md: bind<sda3>
 md: running: <sda3><sdb3>
 md: personality for level 1 is not loaded!
 md: do_md_run() returned -22
 md: md3 stopped.
 md: unbind<sda3>
 md: export_rdev(sda3)
 md: unbind<sdb3>
 md: export_rdev(sdb3)
 md: considering sda1 ...
 md:  adding sda1 ...
 md:  adding sdb1 ...
 md: created md1
 md: bind<sdb1>
 md: bind<sda1>
 md: running: <sda1><sdb1>
 md: personality for level 1 is not loaded!
 md: do_md_run() returned -22
 md: md1 stopped.
 md: unbind<sda1>
 md: export_rdev(sda1)
 md: unbind<sdb1>
 md: export_rdev(sdb1)
 md: ... autorun DONE.
 EXT3-fs: unable to read superblock
 FAT: unable to read boot sector
 VFS: Cannot open root device "md3" or unknown-block(9,3)
 Please append a correct "root=" boot option; here are the available
 partitions: 1600 4194302 hdc driver: ide-cdrom
 0810 20971520 sdb driver: sd
   0811   40131 sdb1
   0812  530145 sdb2
   0813 20394517 sdb3
 0800 20971520 sda driver: sd
   0801   40131 sda1
   0802  530145 sda2
   0803 20394517 sda3
 Kernel panic - not syncing: VFS: Unable to mount root fs on
 unknown-block(9,3) Pid: 1, comm: swapper Not tainted 2.6.32-gentoo-r7 #1
 Call Trace:
  [<c12e4bd9>] ? panic+0x38/0xd3
  [<c143fb34>] ? mount_block_root+0x1e9/0x1fd
  [<c143fb81>] ? mount_root+0x39/0x4d
  [<c143fcd7>] ? prepare_namespace+0x142/0x168
  [<c143f31e>] ? kernel_init+0x167/0x172
  [<c143f1b7>] ? kernel_init+0x0/0x172
  [<c100344f>] ? kernel_thread_helper+0x7/0x10
 
 I've booted with a live CD and checked the arrays they look good, i'm
 not sure how to correct this UUID issue, any suggestions welcome.
 Thanks.
 Dave.

well, don't make raid1 support a module. Put it into the kernel.



Re: [gentoo-user] raid autodetection uuid differences

2010-04-17 Thread Mark Knecht
On Sat, Apr 17, 2010 at 12:00 PM, David Mehler dave.meh...@gmail.com wrote:
 Hello,
 I've got a new gentoo box with two drives that i'm using raid1 on. On
 boot the md raid autodetection is failing. Here's the error i'm
 getting:

SNIP

 I've booted with a live CD and checked the arrays they look good, i'm
 not sure how to correct this UUID issue, any suggestions welcome.
 Thanks.
 Dave.



Dave,
   I suspect this is the same problem I had two weeks ago. Search for
my thread called:

How does grub assemble a RAID1 for / ??

and read that for background.

If I'm correct this is a metadata issue. You have two choices:

1) What I think you've done is create the RAID1 without specifying
--metadata=0.90. If that's correct then you __must__ use an initramfs
to load mdadm. I'm studying how to do that myself.

2) Rebuild the RAID1 specifying --metadata=0.90 which is the only
metadata type that the kernel can auto-assemble for you at boot time
without an initramfs, and what I'm currently using here.
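
For option 2 the recreate would look roughly like this, using the device names
from the log above (a sketch only: recreating with a different superblock
format does not necessarily preserve the existing contents, since 0.90 and 1.x
superblocks use different data offsets, so back up first and be prepared to
restore; --assume-clean just avoids a pointless initial resync):

# mdadm --stop /dev/md3
# mdadm --create /dev/md3 --metadata=0.90 --level=1 --raid-devices=2 \
      --assume-clean /dev/sda3 /dev/sdb3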

Hope this helps,
Mark



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-22 Thread Paul Hartman
On Sun, Mar 21, 2010 at 7:12 AM, KH gentoo-u...@konstantinhansen.de wrote:
 Am 20.03.2010 19:26, schrieb Mark Knecht:
 [...]

 So the chassis and drives for this 1st machine are on order. 6 1TB
 green drives. []
 - Mark


 Hi Mark,

 What do you mean by green drives? I had been told - but never searched for
 confirmation - that those energy saving drives change spinning and also do
 spin down. The problem would be that the drives than might drop out of the
 raid since they are not reachable fast.

 Don't know if that is true. I bought me some black label drives for the
 longer warranty.

If it is a WD drive, google TLER for info about possible problems in RAID use.



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-22 Thread Mark Knecht
On Mon, Mar 22, 2010 at 8:51 AM, Paul Hartman
paul.hartman+gen...@gmail.com wrote:
 On Sun, Mar 21, 2010 at 7:12 AM, KH gentoo-u...@konstantinhansen.de wrote:
 Am 20.03.2010 19:26, schrieb Mark Knecht:
 [...]

 So the chassis and drives for this 1st machine are on order. 6 1TB
 green drives. []
 - Mark


 Hi Mark,

 What do you mean by green drives? I had been told - but never searched for
 confirmation - that those energy saving drives change spinning and also do
 spin down. The problem would be that the drives than might drop out of the
 raid since they are not reachable fast.

 Don't know if that is true. I bought me some black label drives for the
 longer warranty.

 If it is a WD drive, google TLER for info about possible problems in RAID 
 use.


Yeah, those issues do get discussed at times on the Linux RAID list.
I've asked questions about it and been told that Linux software RAID
depends totally on what the driver tells it and nothing seems to be
done (as best I can tell) based on any fixed time. That's more of a
hardware controller issue. I was told that if the drive by itself
doesn't fail at the system level when it's spinning up, then it won't
fail at the RAID level either. However what it does if it has a
hardware error is a bit beyond me at this point. My intention is to
try and get better with smartd so that the drive is continually
monitored and see if I can get ahead of a failure with that.



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-21 Thread KH

Am 20.03.2010 19:26, schrieb Mark Knecht:
[...]

So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. []
- Mark



Hi Mark,

What do you mean by green drives? I had been told - but never searched 
for confirmation - that those energy saving drives change spinning and 
also do spin down. The problem would be that the drives might then drop 
out of the raid since they are not reachable fast enough.


Don't know if that is true. I bought me some black label drives for the 
longer warranty.


kh



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-21 Thread KH

Am 20.03.2010 19:29, schrieb Mark Knecht:
[...]


I'm thinking I'll keep it as simple as possible and just spread out
the Gentoo install over the multiple hard drives without using RAID,
but maybe not. It would be nice to have everything on RAID but I don't
know if I should byte that off for my first taste of building RAID.

[...]

Very helpful. Thanks!

Cheers,
Mark



Hi,

I have boot on raid1 and everything else on raid5. Also swap is raid5. 
It wasn't hard to do that.
If I did it again, I would also create a small (5GB) raid5 for testing 
stuff. Like when I try to reassemble or change something. Copy some 
movies and music to that drive. Whenever you need to change something 
with your real raid, do it with the test one first and see if you can 
still listen to your music.
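
A throwaway test array along those lines can be as simple as (the partition 
names are placeholders for spare space):

# mdadm --create /dev/md9 --level=5 --raid-devices=3 /dev/sda9 /dev/sdb9 /dev/sdc9
# mkfs.ext4 /dev/md9 && mount /dev/md9 /mnt/raidtest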


Regards
kh



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-21 Thread Florian Philipp
Am 20.03.2010 19:26, schrieb Mark Knecht:
 On Sat, Mar 20, 2010 at 9:38 AM, KH gentoo-u...@konstantinhansen.de wrote:
 Mark Knecht schrieb:


 Smiling broadly... :-) Yeah.. Well, keeping my wife's data safe
 keeps me happy. :-)
 
 So the chassis and drives for this 1st machine are on order. 6 1TB
 green drives. Now I just need to decide what sort of RAID to use. I
 don't need much speed writing so I'm thinking maybe a 3 drive RAID1
 setup with a hot spare managed using mdadm and then LVM on top of it.
 

With 4 drives, you could build a RAID-6, too. It's like a RAID-5 but
protects against failure of any two drives.



signature.asc
Description: OpenPGP digital signature


Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-20 Thread Florian Philipp
Am 19.03.2010 23:40, schrieb Mark Knecht:
[...]
 
The LVM Install doc is pretty clear about not putting these in LVM:
 
 /etc, /lib, /mnt, /proc, /sbin, /dev, and /root
 

/boot shouldn't be there, either. Not sure about /bin

 which seems sensible. From an install point of view I'm wondering
 about RAID and how I should treat /, /boot and swap? As I'm planning
 on software RAID it seems that maybe those part of the file system
 should not even be part of RAID. Is that sensible? I always want /
 available to mount the directories above. /boot on RAID means (I
 guess) that I'd need RAID in the kernel instead of modular, and why do
 I need swap on RAID?
[...] 

If you use kernel based software RAID (mdadm, not dmraid), you can put
everything except of /boot on RAID. Even for /boot, there are
workarounds. I think there was a thread about it very recently right on
this list.

If you don't want to use an initrd (and believe me, you don't), you
cannot build the RAID components as modules, of course. But why would
you want to? You need it anyway, all the time between bootup and shutdown.

You don't need to put swap on a RAID. Swap has its own system for
implementing RAID-1 or RAID-0-like functionality. Using a RAID-1 for it
prevents the machine from crashing if the disk on which swap resides
dies. RAID-0 would be faster, of course.
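
To illustrate that "own system": giving several swap partitions the same
priority in /etc/fstab makes the kernel use them round-robin, RAID-0 style;
the device names here are made up:

  # /etc/fstab - equal pri= values are striped across, like RAID-0
  /dev/sda2   none   swap   sw,pri=1   0 0
  /dev/sdb2   none   swap   sw,pri=1   0 0

  # for the RAID-1-like variant you would instead swap on an md mirror:
  # mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
  # mkswap /dev/md2 && swapon /dev/md2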

I personally find it easier to put swap on LVM in order to make
management easier. However, if you want to use suspend-to-disk (a.k.a.
hibernate), you would need an initrd, again.

Alternatively, you can also use LVM for mirroring (RAID-1) or striping
(RAID-0) single volumes. I think this only makes sense if you just want
to protect some single volumes. After using it for some time, I found it
not worth the effort. With current disk prices, just mirror everything
and live easy ;-)
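
For completeness, mirroring or striping just a single logical volume looks
roughly like this; the VG name and sizes are invented, and the mirror log is
kept in memory here so that two PVs are enough:

  # RAID-1-like: one LV mirrored onto two PVs
  lvcreate -L 20G -m 1 --mirrorlog core -n important vg0

  # RAID-0-like: one LV striped across two PVs
  lvcreate -L 20G -i 2 -n scratch vg0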

Hope this helps,
Florian Philipp





Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-20 Thread KH

Mark Knecht schrieb:

Hi,

[...]

3) Wife's new desktop

[...]

I want high reliability

[...]

The most important task of this machine is to keep data safe.

[...]


Thanks,
Mark



Hi Mark,

For me it sounds like those points just don't fit together ;-)

Regards
kh



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-20 Thread Mark Knecht
On Sat, Mar 20, 2010 at 9:38 AM, KH gentoo-u...@konstantinhansen.de wrote:
 Mark Knecht schrieb:

 Hi,

 [...]

 3) Wife's new desktop

 [...]

 I want high reliability

 [...]

 The most important task of this machine is to keep data safe.

 [...]

 Thanks,
 Mark


 Hi Mark,

 For me it sounds like those points just don't fit together ;-)

 Regards
 kh


Smiling broadly... :-) Yeah.. Well, keeping my wife's data safe
keeps me happy. :-)

So the chassis and drives for this 1st machine are on order. 6 1TB
green drives. Now I just need to decide what sort of RAID to use. I
don't need much speed writing so I'm thinking maybe a 3 drive RAID1
setup with a hot spare managed using mdadm and then LVM on top of it.
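
In mdadm/LVM terms I imagine that would be something like the following;
the device names are of course guesses:

  # three active mirrors plus one hot spare
  mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

  # LVM on top of the mirror
  pvcreate /dev/md0
  vgcreate vg0 /dev/md0
  lvcreate -L 500G -n data vg0
  mkfs.ext3 /dev/vg0/data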

The main backup data will be coming from another machine I'm building
that runs a new i7 980x 12 core processor with 24GB of DRAM. That
machine will run 5 copies of VirtualBox/Windows 7 using 2 cores + 4GB
DRAM for each, leaving 2 cores and 4GB for Gentoo as the host OS. Each
Windows instance crunches numbers 24/7 and needs to be backed up once
a day with each backup being about 20GB. I'll move approximately 100GB
across the network each night to the 1st machine at least once a day,
possibly more. This data needs to be very safe so once every week it
goes offsite also.
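
Something like this cron entry on the backup box is what I have in mind for
the nightly pull; the hostname and paths are obviously placeholders:

  # pull the day's VM backups at 04:00, over ssh, deleting stale copies
  0 4 * * *  rsync -aH --delete crunchbox:/var/backups/vms/ /mnt/raid/backups/vms/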

Other than that the 1st machine will also be a MythTV backend server.
That's a pretty light load hardware wise but uses a little computing
muscle to do commercial detection and the like.

And then yes, my wife can use it as her main desktop as most of the
above work takes place overnight with Myth being in the evening and
backups of machine #1 occurring at 4AM, etc.

- Mark



Re: [gentoo-user] RAID/LVM machine - install questions

2010-03-20 Thread Mark Knecht
On Sat, Mar 20, 2010 at 6:22 AM, Florian Philipp
li...@f_philipp.fastmail.net wrote:
 Am 19.03.2010 23:40, schrieb Mark Knecht:
 [...]

    The LVM Install doc is pretty clear about not putting these in LVM:

 /etc, /lib, /mnt, /proc, /sbin, /dev, and /root


 /boot shouldn't be there, either. Not sure about /bin

 which seems sensible. From an install point of view I'm wondering
 about RAID and how I should treat /, /boot and swap? As I'm planning
 on software RAID it seems that maybe those part of the file system
 should not even be part of RAID. Is that sensible? I always want /
 available to mount the directories above. /boot on RAID means (I
 guess) that I'd need RAID in the kernel instead of modular, and why do
 I need swap on RAID?
[...]

 If you use kernel based software RAID (mdadm, not dmraid), you can put
 everything except of /boot on RAID. Even for /boot, there are
 workarounds. I think there was a thread about it very recently right on
 this list.

I'm thinking I'll keep it as simple as possible and just spread out
the Gentoo install over the multiple hard drives without using RAID,
but maybe not. It would be nice to have everything on RAID, but I don't
know if I should bite that off for my first taste of building RAID.



 If you don't want to use an initrd (and believe me, you don't), you
 cannot build the RAID components as modules, of course. But why would
 you want to? You need it anyway, all the time between bootup and shutdown.

No initrd. I've never used it in 10 years of running Linux and I
wouldn't know how to start or even why I would use it. I suppose if I
had hardware RAID then maybe I'd need to but that's not my plan.


 You don't need to put swap on a RAID. Swap has its own system for
 implementing RAID-1 or RAID-0-like functionality. Using a RAID-1 for it
 prevents the machine from crashing if the disk on which swap resides
 dies. RAID-0 would be faster, of course.

 I personally find it easier to put swap on LVM in order to make
 management easier. However, if you want to use suspend-to-disk (a.k.a.
 hibernate), you would need an initrd, again.

 Alternatively, you can also use LVM for mirroring (RAID-1) or striping
 (RAID-0) single volumes. I think this only makes sense if you just want
 to protect some single volumes. After using it for some time, I found it
 not worth the effort. With current disk prices, just mirror everything
 and live easy ;-)

 Hope this helps,
 Florian Philipp



Very helpful. Thanks!

Cheers,
Mark



Re: [gentoo-user] Raid 5 creation is slow - Can this be done quicker? [SOLVED]

2010-02-08 Thread J. Roeleveld
On Monday 01 February 2010 12:58:49 J. Roeleveld wrote:
 Hi All,
 
 I am currently installing a new server and am using Linux software raid to
 merge 6 * 1.5TB drives in a RAID5 configuration.
 
 Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")

 Is there a way that will speed this up? The drives are new, but contain
 random data left over from some speed and reliability tests I did. I don't
 care about keeping the current 'data', as long as the array is reliable
 afterwards.

 Can I use the "--assume-clean" option with mdadm and then expect it to
 keep working, even through reboots?
 Or is this a really bad idea?
 
 Many thanks,
 
 Joost Roeleveld
 

Hi all,

Many thanks for all the input. I did wait the 20 hours, but when it was
finished the performance was still slow, and trying out different options for
the array didn't actually help.

Thanks to the thread "1-Terabyte drives - 4K sector sizes? - bad performance
so far" I figured out the problem (4KB sectors).
After changing the partitions to start at sector 64 (as opposed to 63), a
build of the array should take only 6 hours.
Hopefully the RAID array will also show better performance when this is
finished.
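
For the archives, the alignment can be checked and redone roughly like this
(the disk name is an example; any start sector divisible by 8 is 4 KiB-aligned,
and newer partitioning tools default to 2048 anyway):

  # show partition start sectors; they should be divisible by 8
  parted /dev/sdb unit s print

  # recreate the partition starting at sector 64 instead of 63
  # (destroys the old entry; parted may still grumble that 64 is not
  # "optimally" aligned - a multiple of 8 is what matters for 4K drives)
  parted -s /dev/sdb rm 1
  parted -s /dev/sdb mkpart primary 64s 100%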

--
Joost Roeleveld



Re: [gentoo-user] Raid 5 creation is slow - Can this be done quicker?

2010-02-01 Thread Kyle Bader
Most of the wait I would assume is due to the size of the volume and
creating parity.  If it was my array I'd probably just sit tight and
wait it out.
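
If someone does want to nudge it along, the md resync throttle can be raised
while the array builds; the numbers are in KiB/s and only examples of what
reasonable disks can sustain:

  # current limits
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

  # raise them for the duration of the build
  echo 50000  > /proc/sys/dev/raid/speed_limit_min
  echo 200000 > /proc/sys/dev/raid/speed_limit_max

  watch -n 10 cat /proc/mdstat    # keep an eye on the rebuild speed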

On 2/1/10, J. Roeleveld jo...@antarean.org wrote:
 Hi All,

 I am currently installing a new server and am using Linux software raid to
 merge 6 * 1.5TB drives in a RAID5 configuration.

 Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")

 Is there a way that will speed this up? The drives are new, but contain
 random data left over from some speed and reliability tests I did. I don't
 care about keeping the current 'data', as long as the array is reliable
 afterwards.

 Can I use the "--assume-clean" option with mdadm and then expect it to
 keep working, even through reboots?
 Or is this a really bad idea?

 Many thanks,

 Joost Roeleveld



-- 
Sent from my mobile device


Kyle



Re: [gentoo-user] Raid 5 creation is slow - Can this be done quicker?

2010-02-01 Thread Stroller


On 1 Feb 2010, at 11:58, J. Roeleveld wrote:

...
I am currently installing a new server and am using Linux software raid to
merge 6 * 1.5TB drives in a RAID5 configuration.

Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")

Is there a way that will speed this up? The drives are new, but contain
random data left over from some speed and reliability tests I did. I don't
care about keeping the current 'data', as long as the array is reliable
afterwards.

Can I use the "--assume-clean" option with mdadm and then expect it to keep
working, even through reboots?
Or is this a really bad idea?



It wasn't my intention to chide you - I don't use software RAID  
myself, and your question piqued my curiosity - but the first three  
Google hits for assume-clean indicate that this isn't safe to use  
with RAID5.


The 4th Google hit contains an extract from the manpage:

  ... It can
  also be used when creating a RAID1 or RAID10 if you want
  to avoid the initial resync, however this practice --
  while normally safe -- is not recommended. Use this
  only if you really know what you are doing.


I have to say that I don't fully understand this. I would have thought  
that one could pretend the entire array was empty, and the RAID driver  
would just overwrite the disk as you write to the filesystem. The  
parts used by the filesystem are the only parts you care about, and I  
wouldn't have thought it would matter if the unused parts weren't in  
sync. I would be delighted if someone could explain it to me.
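
Not an answer to the "why", but for what it's worth an array created with
--assume-clean can at least be checked and brought into sync afterwards
through sysfs (md0 is just an example name):

  # count parity/mirror mismatches without changing anything
  echo check > /sys/block/md0/md/sync_action
  cat /sys/block/md0/md/mismatch_cnt

  # rewrite parity so the whole array really is consistent
  echo repair > /sys/block/md0/md/sync_action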


I kinda expected this 20 hours to be spent verifying that the disks  
contain no bad sectors, which would really hose you if it were the case.


But OTOH, 20 hours does not seem an outrageous amount of time for  
building a 7.5TB array. You're not going to do this often, and you  
want it done right.


It would be interesting to know whether hardware RAID would behave any  
differently or allow the sync to perform in the background. I have  
only 1.5TB in RAID5 across 4 x 500gb drives at present; IIRC the  
expansion from 3 x drives took some hours, but I can't recall the  
initial setup.


Stroller.




Re: [gentoo-user] Raid 5 creation is slow - Can this be done quicker?

2010-02-01 Thread J. Roeleveld
On Monday 01 February 2010 14:20:28 Stroller wrote:
 On 1 Feb 2010, at 11:58, J. Roeleveld wrote:
  ...
  I am currently installing a new server and am using Linux software raid to
  merge 6 * 1.5TB drives in a RAID5 configuration.

  Creating the RAID5 takes over 20 hours (according to "cat /proc/mdstat")

  Is there a way that will speed this up? The drives are new, but contain
  random data left over from some speed and reliability tests I did. I don't
  care about keeping the current 'data', as long as the array is reliable
  afterwards.

  Can I use the "--assume-clean" option with mdadm and then expect it to keep
  working, even through reboots?
  Or is this a really bad idea?
 
 It wasn't my intention to chide you - I don't use software RAID
 myself, and your question piqued my curiosity - but the first three
 Google hits for assume-clean indicate that this isn't safe to use
 with RAID5.
 
 The 4th Google hit contains an extract from the manpage:
 
... It can
also be used when creating a RAID1 or RAID10 if you want
to avoid the initial resync, however this practice --
while normally safe -- is not recommended. Use this
only if you really know what you are doing.

I did find the same results on Google, but not really a proper explanation as 
to why it's a bad idea. Unfortunately, my budget doesn't extend to a 
hardware raid solution. (The cheap cards offload it to the CPU anyway and are 
generally considered slower in various benchmarks)

 I kinda expected this 20 hours to be spent verifying that the disks
 contain no bad sectors, which would really hose you if it were the case.

True, but I already ran badblocks twice on each disk to verify that the 
disks are fine. (No badblocks found).
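
For reference, these are the two variants I mean; the -w one is destructive
and wipes the disk, so it is only for before the array is built:

  badblocks -wsv /dev/sdb    # write-mode test: four patterns, destroys all data
  badblocks -sv  /dev/sdb    # read-only test: safe on a disk already in use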

 But OTOH, 20 hours does not seem an outrageous amount of time for
 building a 7.5TB array. You're not going to do this often, and you
 want it done right.

Good point, and I agree, which is why I will let it finish its course, but I
also expected it could be done more quickly.

 It would be interesting to know whether hardware RAID would behave any
 differently or allow the sync to perform in the background. I have
 only 1.5TB in RAID5 across 4 x 500gb drives at present; IIRC the
 expansion from 3 x drives took some hours, but I can't recall the
 initial setup.

I'm hoping someone with more knowledge about RAID-systems can throw in his/her 
2cents.

Thanks,

Joost



Re: [gentoo-user] Raid 5 creation is slow - Can this be done quicker?

2010-02-01 Thread Kyle Bader
 It would be interesting to know whether hardware RAID would behave any
 differently or allow the sync to perform in the background. I have
 only 1.5TB in RAID5 across 4 x 500gb drives at present; IIRC the
 expansion from 3 x drives took some hours, but I can't recall the
 initial setup.

LSI, 3ware and Areca hardware RAID controllers are capable of doing a
background init, but their performance is impacted while it runs; I can't
speak for other controllers as I haven't used them before. I've built many
RAID6 arrays with all three controllers - 8x 1TB and 8x 1.5TB - and I'll
usually start a foreground init and let it run overnight because it
does take a long time. Also, RAID10 is much faster to get up and
running because it doesn't have to calculate parity.

-- 

Kyle



Re: [gentoo-user] RAID controller

2009-02-15 Thread Alex

Mick wrote:

Hi,


Hi All,

I am thinking of installing Gentoo on a Dell box with this RAID controller:

http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/chapter1.htm

Has anyone got experience with this hardware?  What will I need to include in 
the kernel?  Will I need any fancy firmware/drivers?


You can use the megaraid driver for the PERC5.

Device Drivers -> SCSI device support -> SCSI low-level drivers

   [*]   LSI Logic New Generation RAID Device Drivers
           LSI Logic Management Module (New Driver)
     *    LSI Logic Legacy MegaRAID Driver
     *    LSI Logic MegaRAID SAS RAID Module
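
As far as I can tell those menu entries correspond to the following symbols,
so a quick grep shows whether a kernel already has them (built-in here,
modules are fine too):

  grep MEGARAID /usr/src/linux/.config
  # expect something like:
  # CONFIG_MEGARAID_NEWGEN=y
  # CONFIG_MEGARAID_MM=y
  # CONFIG_MEGARAID_LEGACY=y
  # CONFIG_MEGARAID_SAS=y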


This is my controller

linux # lspci  | grep -i scsi
01:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E 
PCI-Express Fusion-MPT SAS (rev 08)


I have a 2950 II.



Re: [gentoo-user] RAID controller

2009-02-15 Thread Mick
On Sunday 15 February 2009, Alex wrote:
 Mick wrote:

 Hi,

  Hi All,
 
  I am thinking of installing Gentoo on a Dell box with this RAID
  controller:
 
  http://support.dell.com/support/edocs/storage/RAID/PERC5/en/UG/HTML/chapt
 er1.htm
 
  Has anyone got experience with this hardware?  What will I need to
  include in the kernel?  Will I need any fancy firmware/drivers?

 You can use the megaraid driver for the PERC5.

 Device Drivers -> SCSI device support -> SCSI low-level drivers

 [*]   LSI Logic New Generation RAID Device Drivers
         LSI Logic Management Module (New Driver)
   *    LSI Logic Legacy MegaRAID Driver
   *    LSI Logic MegaRAID SAS RAID Module

 This is my controller

 linux # lspci  | grep -i scsi
 01:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E
 PCI-Express Fusion-MPT SAS (rev 08)

 I have a 2950 II.

Nice!

Thank you very much.
-- 
Regards,
Mick




Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-20 Thread Matthias Fechner

Hi Paul,

Paul Hartman wrote:

1 a: be commanded or requested to you must stop b: be urged to :
ought by all means to you must read that book

2: be compelled by physical necessity to one must eat to live : be
required by immediate or future need or purpose to we must hurry to
catch the bus

3 a: be obliged to : be compelled by social considerations to I must
say you're looking well b: be required by law, custom, or moral
conscience to we must obey the rules c: be determined to if you
must go at least wait for me d: be unreasonably or perversely
compelled to why must you argue

4: be logically inferred or supposed to it must be time

5: be compelled by fate or by natural law to what must be will be

6: was or were presumably certain to : was or were bound to if he did
it she must have known

7dialect : may  , shall —used chiefly in questions


hehe, that's really interesting :)
But setting this option in the kernel fixed my problem :)
And no, I'm not running a 64bit kernel; that brings no advantage for the
system.


Thanks a lot for the small language lesson.

Have a merry Christmas and a happy New Year.

TIA,
Matthias

--
Programming today is a race between software engineers striving to 
build bigger and better idiot-proof programs, and the universe trying to 
produce bigger and better idiots. So far, the universe is winning. -- 
Rich Cook




Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-20 Thread Peter Humphrey
On Friday 19 December 2008 20:53:47 Paul Hartman wrote:

 Yes, in English must can also mean that you infer or presume
 something.

s/presume/assume/

(Not the same meaning, in spite of popular misuse.)

-- 
Rgds
Peter



Re: [gentoo-user] Raid reports wrong size

2008-12-19 Thread Shaochun Wang
On Thu, Dec 18, 2008 at 11:45:58PM +0100, Matthias Fechner wrote:
 Hi Dirk,
 
 Dirk Heinrichs schrieb:
  Kernel w/o CONFIG_LBD?
 
 thanks a lot!
 
Your kernel must not be 64bits, I think.
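
A quick way to check, assuming the kernel exports its config (otherwise look
at the .config in the source tree):

  zgrep CONFIG_LBD /proc/config.gz       # needs CONFIG_IKCONFIG_PROC
  grep CONFIG_LBD /usr/src/linux/.config
  # on 32-bit kernels CONFIG_LBD=y is needed for block devices over 2TB;
  # 64-bit kernels do not show the option because they always support it.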

-- 
Shaochun Wang scw...@ios.ac.cn

Jabber: fung...@jabber.org



Re: [gentoo-user] Raid reports wrong size

2008-12-19 Thread Dirk Heinrichs
Am Freitag, 19. Dezember 2008 14:03:04 schrieb Shaochun Wang:

 Your kernel must not be 64bits, I think.

Why is he not allowed to run a 64bit kernel?

Bye...

Dirk





Re: [gentoo-user] Raid reports wrong size

2008-12-19 Thread Volker Armin Hemmann
On Freitag 19 Dezember 2008, Dirk Heinrichs wrote:
 Am Freitag, 19. Dezember 2008 14:03:04 schrieb Shaochun Wang:
  Your kernel must not be 64bits, I think.

 Why is he not allowed to run a 64bit kernel?

 Bye...

   Dirk

the option is not available with 64bits - maybe not needed.





must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-19 Thread Dirk Heinrichs
Am Freitag, 19. Dezember 2008 19:24:12 schrieb Volker Armin Hemmann:
 On Freitag 19 Dezember 2008, Dirk Heinrichs wrote:
  Am Freitag, 19. Dezember 2008 14:03:04 schrieb Shaochun Wang:
   Your kernel must not be 64bits, I think.
 
  Why is he not allowed to run a 64bit kernel?
 

 the option is not available with 64bits - maybe not needed.

Yes, I know. Just wanted to clarify whether there's a misunderstanding about
"must not", which means "darf nicht" in German, i.e. "is not allowed to". It
seems Shaochun is not a native English speaker and makes the same
mistake I also made for a long time.

Bye...

Dirk




Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-19 Thread Paul Hartman
On Fri, Dec 19, 2008 at 2:11 PM, Dirk Heinrichs
dirk.heinri...@online.de wrote:
 Am Freitag, 19. Dezember 2008 19:24:12 schrieb Volker Armin Hemmann:
 On Freitag 19 Dezember 2008, Dirk Heinrichs wrote:
  Am Freitag, 19. Dezember 2008 14:03:04 schrieb Shaochun Wang:
   Your kernel must not be 64bits, I think.
 
  Why is he not allowed to run a 64bit kernel?
 

 the option is not available with 64bits - maybe not needed.

 Yes, I know. Just wanted to clarify whether there's a misunderstanding about
 "must not", which means "darf nicht" in German, i.e. "is not allowed to". It
 seems Shaochun is not a native English speaker and makes the same
 mistake I also made for a long time.

Yes, in English "must" can also mean that you infer or presume
something. So, instead of "your kernel must not be 64bits", maybe it
would have been clearer to say "I suspect you are not using a 64-bit
kernel; if you were, it would not have this problem." :)

Paul



Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-19 Thread Dirk Heinrichs
Am Freitag, 19. Dezember 2008 21:53:47 schrieb Paul Hartman:
 Yes, in English must can also mean that you infer or presume
 something.

Ah, yes. I remember :-)

 So, instead of your kernel must not be 64bits, maybe it
 would have been clearer to say I suspect you are not using a 64-bit
 kernel; if you were, it would not have this problem. :)

So can "your kernel must not..." be understood as "I suspect your kernel is
not..."? Wasn't aware of this... Thanks for clarifying.

Bye...

Dirk




Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-19 Thread Paul Hartman
On Fri, Dec 19, 2008 at 3:13 PM, Dirk Heinrichs
dirk.heinri...@online.de wrote:
 Am Freitag, 19. Dezember 2008 21:53:47 schrieb Paul Hartman:
 Yes, in English must can also mean that you infer or presume
 something.

 Ah, yes. I remember :-)

 So, instead of your kernel must not be 64bits, maybe it
 would have been clearer to say I suspect you are not using a 64-bit
 kernel; if you were, it would not have this problem. :)

 So can your kernel must not... be understood as I suspect your kernel is
 not...? Wasn't aware of this... Thanks for clarifying.

Yes, exactly. It is confusing, especially if you are used to languages
that have proper rules. I think the only rule in English is there are
no rules in English :) :) Here are English dictionary definitions for
must when used as a verb. I think in this case numbers 4 or 7 could
apply.


1 a: be commanded or requested to you must stop b: be urged to :
ought by all means to you must read that book

2: be compelled by physical necessity to one must eat to live : be
required by immediate or future need or purpose to we must hurry to
catch the bus

3 a: be obliged to : be compelled by social considerations to I must
say you're looking well b: be required by law, custom, or moral
conscience to we must obey the rules c: be determined to if you
must go at least wait for me d: be unreasonably or perversely
compelled to why must you argue

4: be logically inferred or supposed to it must be time

5: be compelled by fate or by natural law to what must be will be

6: was or were presumably certain to : was or were bound to if he did
it she must have known

7dialect : may  , shall —used chiefly in questions



Re: must not (was Re: [gentoo-user] Raid reports wrong size)

2008-12-19 Thread Neil Bothwick
On Fri, 19 Dec 2008 22:13:11 +0100, Dirk Heinrichs wrote:

  So, instead of your kernel must not be 64bits, maybe it
  would have been clearer to say I suspect you are not using a 64-bit
  kernel; if you were, it would not have this problem. :)  
 
 So can your kernel must not... be understood as I suspect your
 kernel is not...? Wasn't aware of this... Thanks for clarifying.

It's more like "I am assuming your kernel is not". Either way, it's a
highly ambiguous sentence :(


-- 
Neil Bothwick

I have seen things you lusers would not believe.
I've seen Sun monitors on fire off the side of the multimedia lab.
I've seen NTU lights glitter in the dark near the Mail Gate.
All these things will be lost in time, like the root partition last
week. Time to die.




  1   2   >