bug#38086: RAID installation script with ‘mdadm’ no longer works

2020-01-18 Thread Tobias Geerinckx-Rice via Bug reports for GNU Guix

Ludovic Courtès wrote:

As you can see, it’s attempting to make a RAID1 device out of two
partitions (not two disks), which makes no sense in the real world, but
is easier to handle here.  So I wonder if this is what’s causing it to
hang…


It's just waiting for input:

 $ # dd & losetup magic, where loop0 is 20% larger than loop1
 $ sudo mdadm --create /dev/md0 --verbose --level=mirror \
     --raid-devices=2 /dev/loop{0,1}

 mdadm: Note: this array has metadata at the start and
   may not be suitable as a boot device.  If you plan to
   store '/boot' on this device please ensure that
   your boot-loader understands md/v1.x metadata, or use
   --metadata=0.90
 mdadm: size set to 101376K
 mdadm: largest drive (/dev/loop1) exceeds size (101376K) by more than 1%

 Continue creating array?

Adding --force does not avoid this.

I recommend tweaking the partition table to make both members equal,
but piping ‘yes’ into the command (‘yes | mdadm …’) also works if
you're in a hurry ;-)
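If you want to check ahead of time whether mdadm will stop and ask, the
“more than 1%” comparison from its warning is easy to reproduce.  A
minimal sketch (the helper name is made up; it reads sizes with ‘stat’,
so the loop devices' backing files work just as well as the devices
themselves):

```shell
#!/bin/sh
# Pre-check two prospective RAID members for the >1% size mismatch
# that makes ‘mdadm --create’ stop and ask for confirmation.
# Hypothetical helper; sizes are read with ‘stat’, so plain files
# (e.g. loop-device backing files) work as well as block devices.
size_mismatch_over_1pct () {
  a=$(stat -c %s "$1")
  b=$(stat -c %s "$2")
  # Order the two sizes so that $small <= $large.
  if [ "$a" -le "$b" ]; then small=$a; large=$b; else small=$b; large=$a; fi
  # mdadm warns when the largest member exceeds the array size
  # (i.e. the smallest member) by more than 1%.
  awk -v s="$small" -v l="$large" 'BEGIN { exit !((l - s) * 100 > s) }'
}
```

Usage: ‘size_mismatch_over_1pct /dev/loop0 /dev/loop1 && echo "mdadm
will prompt"’.  (This is only the size check from the message above;
mdadm's real comparison is on the usable size after metadata, so treat
it as an approximation.)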


Kind regards,

T G-R




bug#38086: RAID installation script with ‘mdadm’ no longer works

2020-01-18 Thread Ludovic Courtès
Hi!

Vagrant Cascadian wrote:

> So, this might be sort of a tangent, but I'm wondering why you're
> testing raid0 (striping, for performance+capacity at risk of data loss)
> instead of raid1 (mirroring, for redundancy, fast reads, slow writes,
> half capacity of storage), or another raid level with more disks (raid5,
> raid6, raid10). raid1 would be the simplest to switch the code to, since
> it uses only two disks.

Good point!  I guess it would make sense to test RAID1, indeed.

I gave it a shot with the patch below.  Problem is that installation
seemingly hangs here:

--8<---cut here---start->8---
+ parted --script /dev/vdb mklabel gpt mkpart primary ext2 1M 3M mkpart primary ext2 3M 1.4G mkpart primary ext2 1.4G 2.8G set 1 boot on set 1 bios_grub on
+ mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 /dev/vdb2 /dev/vdb3
mdadm: Note: this array has metadata at the start and
may not be suitable as a boot device.  If you plan to
store '/boot' on this device please ensure that
your boot-loader understands md/v1.x metadata, or use
--metadata=0.90
mdadm: size set to 1361920K
mdadm: largest drive (/dev/vdb3) exceeds size (1361920K) by more than 1%
--8<---cut here---end--->8---

As you can see, it’s attempting to make a RAID1 device out of two
partitions (not two disks), which makes no sense in the real world, but
is easier to handle here.  So I wonder if this is what’s causing it to
hang…
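One way to sidestep mdadm's size check entirely is to give both members
exactly the same size.  A minimal sketch of a helper that prints
MiB-aligned ‘parted’ boundaries for two equal members (the helper name
and offsets are hypothetical; binary units are used so parted does no
decimal rounding):

```shell
# Print ‘parted’ mkpart arguments for two members of identical size.
# Hypothetical helper: $1 is the first member's start offset and $2
# the size of each member, both in MiB.
equal_members () {
  start=$1; size=$2
  mid=$((start + size))
  end=$((mid + size))
  printf 'mkpart primary ext2 %sMiB %sMiB mkpart primary ext2 %sMiB %sMiB\n' \
         "$start" "$mid" "$mid" "$end"
}

# Two 1400 MiB members starting at 3 MiB:
equal_members 3 1400
```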

Thoughts?

Ludo’.

diff --git a/gnu/tests/install.scm b/gnu/tests/install.scm
index 8842d48df8..12e6eb26df 100644
--- a/gnu/tests/install.scm
+++ b/gnu/tests/install.scm
@@ -546,8 +546,8 @@ where /gnu lives on a separate partition.")
  (target "/dev/vdb")))
 (kernel-arguments '("console=ttyS0"))
 
-;; Add a kernel module for RAID-0 (aka. "stripe").
-(initrd-modules (cons "raid0" %base-initrd-modules))
+;; Add a kernel module for RAID-1 (aka. "mirror").
+(initrd-modules (cons "raid1" %base-initrd-modules))
 
 (mapped-devices (list (mapped-device
(source (list "/dev/vda2" "/dev/vda3"))
@@ -578,11 +578,11 @@ guix --version
 export GUIX_BUILD_OPTIONS=--no-grafts
 parted --script /dev/vdb mklabel gpt \\
   mkpart primary ext2 1M 3M \\
-  mkpart primary ext2 3M 600M \\
-  mkpart primary ext2 600M 1200M \\
+  mkpart primary ext2 3M 1.4G \\
+  mkpart primary ext2 1.4G 2.8G \\
   set 1 boot on \\
   set 1 bios_grub on
-mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2 \\
+mdadm --create /dev/md0 --verbose --level=mirror --raid-devices=2 \\
   /dev/vdb2 /dev/vdb3
 mkfs.ext4 -L root-fs /dev/md0
 mount /dev/md0 /mnt
@@ -605,7 +605,7 @@ by 'mdadm'.")
%raid-root-os-source
#:script
%raid-root-installation-script
-   #:target-size (* 1300 MiB)))
+   #:target-size (* 2800 MiB)))
  (command (qemu-command/writable-image image)))
   (run-basic-test %raid-root-os
   `(,@command) "raid-root-os")


bug#39172: SElinux guix-daemon.cil file

2020-01-18 Thread Matt Wette

Hi All,

I apologize for the formatting.  I use Thunderbird and I can't find a
way to enable plain-text mode.


I'm trying to get guix-1.0.1 running on Fedora-30 with its default
SELinux setup.  I found (hint from
https://lists.gnu.org/archive/html/guix-devel/2019-05/msg00109.html)
that the guix-daemon.cil file seems to be missing a few items.  Without
this patch,

    # restorecon -R /gnu/store

fails.

--- guix-daemon.cil.orig    2020-01-18 07:08:12.905986299 -0800
+++ guix-daemon.cil    2020-01-18 07:09:49.765737261 -0800
@@ -34,14 +34,19 @@
   (roletype object_r guix_daemon_t)
   (type guix_daemon_conf_t)
   (roletype object_r guix_daemon_conf_t)
+  (typeattributeset file_type guix_daemon_conf_t)
   (type guix_daemon_exec_t)
   (roletype object_r guix_daemon_exec_t)
+  (typeattributeset file_type guix_daemon_exec_t)
   (type guix_daemon_socket_t)
   (roletype object_r guix_daemon_socket_t)
+  (typeattributeset file_type guix_daemon_socket_t)
   (type guix_store_content_t)
   (roletype object_r guix_store_content_t)
+  (typeattributeset file_type guix_store_content_t)
   (type guix_profiles_t)
   (roletype object_r guix_profiles_t)
+  (typeattributeset file_type guix_profiles_t)

   ;; These types are domains, thereby allowing process rules
   (typeattributeset domain (guix_daemon_t guix_daemon_exec_t))
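For what it's worth, my understanding (based on the refpolicy
convention, so take it as an assumption) is that setfiles/restorecon
only accepts labels whose type carries the ‘file_type’ attribute, which
is why the relabel fails without the patch.  Each added line associates
one type with that attribute, e.g.:

```
;; Mark guix_store_content_t as a file type so that restorecon
;; will accept it when relabeling files under /gnu/store.
(typeattributeset file_type guix_store_content_t)
```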


bug#38086: RAID installation script with ‘mdadm’ no longer works

2020-01-18 Thread Gábor Boskovits
Vagrant Cascadian wrote (on Fri, 17 Jan 2020, 23:42):

> On 2019-11-12, Ludovic Courtès wrote:
> > Gábor Boskovits  skribis:
> >
> >>> + mdadm --create /dev/md0 --verbose --level=stripe --raid-devices=2
> >>> /dev/vdb2 /dev/vdb3
> >>> mdadm: chunk size defaults to 512K
> >>> mdadm: Defaulting to version 1.2 metadata
> >>> [   13.890586] md/raid0:md0: cannot assemble multi-zone RAID0 with
> >>> default_layout setting
> >>> [   13.894691] md/raid0: please set raid0.default_layout to 1 or 2
> >>> [   13.896000] md: pers->run() failed ...
> >>> mdadm: RUN_ARRAY failed: Unknown error 524
> >>> [   13.901603] md: md0 stopped.
> >>> --8<---cut here---end--->8---
> >>>
> >>> Anyone knows what it takes to “set raid0.default_layout to 1 or 2”?
> >>>
> >>
> >> On kernel 5.3.4 and above the
> >> raid0.default_layout=2 kernel boot paramter should be set. We should
> >> generate our grub configuration accordingly.
>
> So, this might be sort of a tangent, but I'm wondering why you're
> testing raid0 (striping, for performance+capacity at risk of data loss)
> instead of raid1 (mirroring, for redundancy, fast reads, slow writes,
> half capacity of storage), or another raid level with more disks (raid5,
> raid6, raid10). raid1 would be the simplest to switch the code to, since
> it uses only two disks.
>
>
> The issue triggering this bug might be a non-issue on other raid levels
> that in my mind might make more sense for rootfs. Or maybe people have
> use-cases for rootfs on raid0 that I'm too uncreative to think of? :)
>

I often see RAID 10 used as root.  I believe it might make sense to test
that setup.
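On the raid0 side, the layout the running kernel would apply is visible
at run time once the module is loaded; a small sketch (the sysfs path is
the standard module-parameter location, assuming a >= 5.3 kernel — the
boot-time fix itself would be the raid0.default_layout=2 kernel argument
mentioned earlier in the thread):

```shell
# Show the raid0 default_layout the running kernel would apply.
# Prints the parameter value (1 or 2) when the raid0 module is
# loaded, and a note otherwise.
layout=$(cat /sys/module/raid0/parameters/default_layout 2>/dev/null) \
  && echo "raid0 default_layout: $layout" \
  || echo "raid0 module not loaded (or kernel < 5.3)"
```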

>
>
> live well,
>   vagrant
>