Hello community,

here is the log from the commit of package yast2-storage-ng for openSUSE:Factory checked in at 2018-07-24 17:29:07
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/yast2-storage-ng (Old)
 and      /work/SRC/openSUSE:Factory/.yast2-storage-ng.new (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "yast2-storage-ng"

Tue Jul 24 17:29:07 2018 rev:26 rq:624822 version:4.0.199

Changes:
--------
--- /work/SRC/openSUSE:Factory/yast2-storage-ng/yast2-storage-ng.changes        2018-07-03 23:32:52.208536323 +0200
+++ /work/SRC/openSUSE:Factory/.yast2-storage-ng.new/yast2-storage-ng.changes   2018-07-24 17:29:13.259821983 +0200
@@ -1,0 +2,26 @@
+Mon Jul 23 13:55:03 UTC 2018 - [email protected]
+
+- document XEN guest setup for testing (bsc#1085134)
+- 4.0.199
+
+-------------------------------------------------------------------
+Wed Jul 18 19:00:11 UTC 2018 - [email protected]
+
+- Partitioner: when creating a partition, use only regions of
+  the selected type: primary, logical or extended (bsc#1097634).
+- 4.0.198
+
+-------------------------------------------------------------------
+Wed Jul 18 11:38:39 UTC 2018 - [email protected]
+
+- AutoYaST: export BIOS RAID devices correctly (bsc#1098594).
+- 4.0.197
+
+-------------------------------------------------------------------
+Mon Jul 16 16:26:28 UTC 2018 - [email protected]
+
+- AutoYaST: do not crash when reusing partitions on non-disk
+  devices like DASD or BIOS RAID (bsc#1098594).
+- 4.0.196
+
+-------------------------------------------------------------------

Old:
----
  yast2-storage-ng-4.0.195.tar.bz2

New:
----
  yast2-storage-ng-4.0.199.tar.bz2

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ yast2-storage-ng.spec ++++++
--- /var/tmp/diff_new_pack.shr789/_old  2018-07-24 17:29:14.547823614 +0200
+++ /var/tmp/diff_new_pack.shr789/_new  2018-07-24 17:29:14.551823620 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           yast2-storage-ng
-Version:        4.0.195
+Version:        4.0.199
 Release:        0
 
 BuildRoot:      %{_tmppath}/%{name}-%{version}-build

++++++ yast2-storage-ng-4.0.195.tar.bz2 -> yast2-storage-ng-4.0.199.tar.bz2 ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/doc/xen-setup.md new/yast2-storage-ng-4.0.199/doc/xen-setup.md
--- old/yast2-storage-ng-4.0.195/doc/xen-setup.md       1970-01-01 01:00:00.000000000 +0100
+++ new/yast2-storage-ng-4.0.199/doc/xen-setup.md       2018-07-23 16:25:46.000000000 +0200
@@ -0,0 +1,205 @@
+# Setting up XEN for testing
+
+This document describes how to set up a XEN vm within a QEMU vm to test special XEN block devices (that are named like partitions but are in fact disks).
+
+For this document the XEN host (= QEMU guest) system uses Leap 15.0 and the XEN guest uses SLE 15, but it would work similarly with other SUSE releases.
+
+## Preparing the QEMU guest (XEN host) vm
+
+If you already have a Leap 15.0 vm, use it.
+
+Otherwise create a new QEMU vm and install Leap 15.0, but:
+
+- add the Leap online repositories (the DVD image does not have XEN tools)
+- select `server` role
+- in the software selection, add the `XEN Virtualization Host and tools` pattern
+
+**Note**
+
+> It should be sufficient to add the `xen` and `xen-tools` packages to a standard Leap.
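+
+On an already installed system that boils down to something like:
+
+```sh
+zypper install xen xen-tools
+```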
+
+Now reboot the QEMU vm and select `openSUSE Leap 15.0, with Xen hypervisor` at the boot menu.
+
+To communicate with the XEN vm you'll need a bridge device. Create a config like this in your QEMU vm:
+
+```sh
+vm8101:/etc/sysconfig/network # cat ifcfg-br0
+STARTMODE='auto'
+BOOTPROTO='dhcp'
+BRIDGE='yes'
+BRIDGE_PORTS='eth0'
+```
+
+and run
+
+```sh
+wicked ifup br0
+```
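+
+To verify that the bridge came up and got an address via DHCP:
+
+```sh
+wicked ifstatus br0
+ip addr show dev br0
+```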
+
+## Preparing the XEN guest vm
+
+> All the commands below are run inside the QEMU vm (the XEN host).
+
+We'll need something to run inside the XEN vm. For this document SLE 15 is used (because it's comparatively small).
+
+Get `SLE-15-Installer-DVD-x86_64-GMC-DVD1.iso` and put it inside the QEMU vm, say as `/data/sle15.iso`:
+
+```sh
+vm8101:/data # ls -l
+total 650240
+-rw-r--r-- 1 root root 665845760 Jul 19 14:25 sle15.iso
+```
+
+Mount it and extract the kernel and initrd:
+
+```sh
+vm8101:/data # mount -oloop,ro sle15.iso /mnt/
+vm8101:/data # cp /mnt/boot/x86_64/loader/{linux,initrd} .
+vm8101:/data # ls -l
+total 731160
+-r--r--r-- 1 root root  75971284 Jul 19 14:26 initrd
+-r--r--r-- 1 root root   6885728 Jul 19 14:26 linux
+-rw-r--r-- 1 root root 665845760 Jul 19 14:25 sle15.iso
+vm8101:/data # umount /mnt
+```
+
+You can use real devices or plain files to map into the XEN guest. For our example we'll try both.
+Let's create an empty file first:
+
+```sh
+vm8101:/data # dd if=/dev/zero of=disk1 bs=1G count=0 seek=60
+```
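+
+The `count=0 seek=60` combination creates a sparse 60G file that occupies almost no real disk space. You can compare apparent size and actual allocation with:
+
+```sh
+ls -lh disk1   # apparent size: 60G
+du -h disk1    # allocated size: (nearly) zero
+```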
+
+The QEMU vm has a disk device for tests, with two partitions:
+
+```sh
+vm8101:~ # lsblk /dev/sdb
+NAME   MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
+sdb      8:16   0  60G  0 disk
+|-sdb1   8:17   0   6G  0 part
+`-sdb2   8:18   0   6G  0 part
+```
+
+A XEN guest vm is defined with a simple config file. For our guest it looks like this:
+
+```sh
+vm8101:/data # cat sle15.cfg
+name = "sle15"
+type = "pv"
+kernel = "/data/linux"
+ramdisk = "/data/initrd"
+cmdline = "startshell=1 sshd=1 password=xxxxx vnc=1 vncpassword=xxxxxxxx"
+memory = 512
+vif = [ '' ]
+disk = [ '/data/sle15.iso,,xvda,cdrom', '/dev/sdb2,,xvdb', '/data/disk1,,xvdc3' ]
+```
+
+This is a paravirtualized guest (full virtualization within another vm is a bit problematic) with
+our SLE 15 iso as CD-ROM and two disk devices. One maps `/dev/sdb2` as a full disk device to `/dev/xvdb`.
+The other maps `/data/disk1` to `/dev/xvdc3`.
+
+Note that you are relatively free to name the device you map to (it doesn't have to start with `xvdc1`, for example).
+If the device name ends with a number, the guest kernel will not try to read the partition table of the device.
+
+The `vif` line will create a network interface (`eth0`) for us.
+
+There are several options for how to interact with yast during the installation:
+
+1. Run yast in ncurses mode. For this use
+  ```sh
+  cmdline = "startshell=1 sshd=1 password=xxxxx"
+  ```
+
+2. Run yast via VNC. For this use
+  ```sh
+  cmdline = "startshell=1 sshd=1 password=xxxxx vnc=1 vncpassword=xxxxxxxx"
+  ```
+  Note: the VNC password must be at least 8 chars long.
+
+3. Run yast via SSH. For this use
+
+  ```sh
+  cmdline = "startshell=1 ssh=1 password=xxxxx"
+  ```
+
+## Starting the XEN guest
+
+Let's get going
+
+```sh
+xl create -c /data/sle15.cfg
+```
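+
+The `-c` option attaches you to the guest console right away. From another shell on the XEN host you can check on the guest with the usual xl commands, for example:
+
+```sh
+xl list
+```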
+
+With the config above this gets the installation system up and running and leaves you at a shell prompt:
+
+```sh
+Starting SSH daemon... ok
+IP addresses:
+  10.0.2.18
+  fec0::216:3eff:fe58:3f40
+
+ATTENTION: Starting shell... (use 'exit' to proceed with installation)
+console:vm9650:/ #
+```
+
+There you can run yast (option 1 above) repeatedly in ncurses mode.
+
+With option 2, after running `yast`, connect to the VNC server, e.g.:
+
+```sh
+vncviewer 10.0.2.18:1
+```
+
+With option 3, connect to the XEN guest vm to run the installation:
+
+```sh
+ssh -X 10.0.2.18
+```
+
+and run `yast` there.
+
+The disk layout of our XEN guest looks like this:
+
+```sh
+console:vm9650:/ # lsblk -e 7
+NAME    MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
+xvda    202:0    0   635M  1 disk
+|-xvda1 202:1    0   3.8M  1 part
+`-xvda2 202:2    0 630.9M  1 part /var/adm/mount
+xvdb    202:16   0     6G  0 disk
+xvdc3   202:35   0    60G  0 disk
+console:vm9650:/ # cat /sys/block/xvdb/range
+16
+console:vm9650:/ # cat /sys/block/xvdc3/range
+1
+```
+
+The `range` value is the number of device minors reserved for a device: 16 for `/dev/xvdb` (the device itself plus room for partitions), but only 1 for `/dev/xvdc3`, so no partition devices can be created for it.
+
+Note that `parted` works just fine with `/dev/xvdc3`.
+And if `/dev/xvdc3` happens to contain a partition table (here it does) you can run `kpartx` to access the partitions:
+
+```sh
+console:vm9650:/ # kpartx -a /dev/xvdc3
+console:vm9650:/ # lsblk -e 7
+NAME      MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
+xvda      202:0    0   635M  1 disk
+|-xvda1   202:1    0   3.8M  1 part
+`-xvda2   202:2    0 630.9M  1 part /var/adm/mount
+xvdb      202:16   0     6G  0 disk
+xvdc3     202:35   0    60G  0 disk
+|-xvdc3p1 254:0    0     6G  0 part
+`-xvdc3p2 254:1    0     6G  0 part
+```
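+
+The mappings created by `kpartx` can be removed again with:
+
+```sh
+kpartx -d /dev/xvdc3
+```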
+
+
+## Stopping the XEN guest
+
+Either do `halt -fp` within the XEN guest or `xl destroy sle15` on the XEN host.
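+
+A graceful alternative to `xl destroy` (which kills the domain immediately) is to send a shutdown request, though the installation system may not react to it:
+
+```sh
+xl shutdown sle15
+```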
+
+
+## Further reading
+
+If you want to extend the XEN configuration, have a look at the man pages in the `xen-tools` package,
+for example `xl.cfg(5)` and `xl-disk-configuration(5)`.
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/package/yast2-storage-ng.changes new/yast2-storage-ng-4.0.199/package/yast2-storage-ng.changes
--- old/yast2-storage-ng-4.0.195/package/yast2-storage-ng.changes       2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/package/yast2-storage-ng.changes       2018-07-23 16:25:46.000000000 +0200
@@ -1,4 +1,30 @@
 -------------------------------------------------------------------
+Mon Jul 23 13:55:03 UTC 2018 - [email protected]
+
+- document XEN guest setup for testing (bsc#1085134)
+- 4.0.199
+
+-------------------------------------------------------------------
+Wed Jul 18 19:00:11 UTC 2018 - [email protected]
+
+- Partitioner: when creating a partition, use only regions of
+  the selected type: primary, logical or extended (bsc#1097634).
+- 4.0.198
+
+-------------------------------------------------------------------
+Wed Jul 18 11:38:39 UTC 2018 - [email protected]
+
+- AutoYaST: export BIOS RAID devices correctly (bsc#1098594).
+- 4.0.197
+
+-------------------------------------------------------------------
+Mon Jul 16 16:26:28 UTC 2018 - [email protected]
+
+- AutoYaST: do not crash when reusing partitions on non-disk
+  devices like DASD or BIOS RAID (bsc#1098594).
+- 4.0.196
+
+-------------------------------------------------------------------
 Thu Jun 28 16:04:56 CEST 2018 - [email protected]
 
 - Added additional searchkeys to desktop file (fate#321043).
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/package/yast2-storage-ng.spec new/yast2-storage-ng-4.0.199/package/yast2-storage-ng.spec
--- old/yast2-storage-ng-4.0.195/package/yast2-storage-ng.spec  2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/package/yast2-storage-ng.spec  2018-07-23 16:25:46.000000000 +0200
@@ -16,7 +16,7 @@
 #
 
 Name:          yast2-storage-ng
-Version:        4.0.195
+Version:        4.0.199
 Release:       0
 
 BuildRoot:     %{_tmppath}/%{name}-%{version}-build
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/src/lib/y2partitioner/dialogs/partition_size.rb new/yast2-storage-ng-4.0.199/src/lib/y2partitioner/dialogs/partition_size.rb
--- old/yast2-storage-ng-4.0.195/src/lib/y2partitioner/dialogs/partition_size.rb        2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/src/lib/y2partitioner/dialogs/partition_size.rb        2018-07-23 16:25:46.000000000 +0200
@@ -42,9 +42,9 @@
         textdomain "storage"
         @disk_name = controller.disk_name
         @controller = controller
-        # FIXME: the available regions should be filtered based on controller.type
-        @regions = controller.unused_slots.map(&:region)
-        @optimal_regions = controller.unused_optimal_slots.map(&:region)
+        type = controller.type
+        @regions = controller.unused_slots.select { |s| s.possible?(type) }.map(&:region)
+        @optimal_regions = controller.unused_optimal_slots.select { |s| s.possible?(type) }.map(&:region)
 
         raise ArgumentError, "No region to make a partition in" if @optimal_regions.empty?
       end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/src/lib/y2storage/autoinst_profile/drive_section.rb new/yast2-storage-ng-4.0.199/src/lib/y2storage/autoinst_profile/drive_section.rb
--- old/yast2-storage-ng-4.0.195/src/lib/y2storage/autoinst_profile/drive_section.rb    2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/src/lib/y2storage/autoinst_profile/drive_section.rb    2018-07-23 16:25:46.000000000 +0200
@@ -151,7 +151,7 @@
       #   <drive> section, like a disk, a DASD or an LVM volume group.
       # @return [Boolean] if attributes were successfully read; false otherwise.
       def init_from_device(device)
-        if device.is?(:md)
+        if device.is?(:software_raid)
           init_from_md(device)
         elsif device.is?(:lvm_vg)
           init_from_vg(device)
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/src/lib/y2storage/autoinst_profile/partitioning_section.rb new/yast2-storage-ng-4.0.199/src/lib/y2storage/autoinst_profile/partitioning_section.rb
--- old/yast2-storage-ng-4.0.195/src/lib/y2storage/autoinst_profile/partitioning_section.rb     2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/src/lib/y2storage/autoinst_profile/partitioning_section.rb     2018-07-23 16:25:46.000000000 +0200
@@ -83,7 +83,7 @@
       def self.new_from_storage(devicegraph)
         result = new
         # TODO: consider also NFS and TMPFS
-        devices = devicegraph.md_raids + devicegraph.lvm_vgs + devicegraph.disk_devices
+        devices = devicegraph.software_raids + devicegraph.lvm_vgs + devicegraph.disk_devices
         result.drives = devices.each_with_object([]) do |dev, array|
           drive = DriveSection.new_from_storage(dev)
           array << drive if drive
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/src/lib/y2storage/proposal/autoinst_space_maker.rb new/yast2-storage-ng-4.0.199/src/lib/y2storage/proposal/autoinst_space_maker.rb
--- old/yast2-storage-ng-4.0.195/src/lib/y2storage/proposal/autoinst_space_maker.rb     2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/src/lib/y2storage/proposal/autoinst_space_maker.rb     2018-07-23 16:25:46.000000000 +0200
@@ -169,7 +169,7 @@
       # @return [Hash<String,Array<String>>] disk name to list of reused partitions map
       def reused_partitions_by_disk(devicegraph, planned_devices)
         find_reused_partitions(devicegraph, planned_devices).each_with_object({}) do |part, map|
-          disk_name = part.disk.name
+          disk_name = part.partitionable.name
           map[disk_name] ||= []
           map[disk_name] << part.name
         end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/test/data/devicegraphs/bug_1098594.xml new/yast2-storage-ng-4.0.199/test/data/devicegraphs/bug_1098594.xml
--- old/yast2-storage-ng-4.0.195/test/data/devicegraphs/bug_1098594.xml 1970-01-01 01:00:00.000000000 +0100
+++ new/yast2-storage-ng-4.0.199/test/data/devicegraphs/bug_1098594.xml 2018-07-23 16:25:46.000000000 +0200
@@ -0,0 +1,384 @@
+<?xml version="1.0"?>
+<!-- generated by libstorage-ng version 3.3.259, ls3084, 2018-06-07 11:58:46 
GMT -->
+<Devicegraph>
+  <Devices>
+    <Disk>
+      <sid>42</sid>
+      <name>/dev/sdb</name>
+      <sysfs-name>sdb</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1f.2/ata2/host1/target1:0:0/1:0:0:0/block/sdb</sysfs-path>
+      <region>
+        <length>781422768</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-path>pci-0000:00:1f.2-ata-2</udev-path>
+      <udev-id>ata-INTEL_SSDSC1NB400G4_BTWL445204M5400JGN</udev-id>
+      <udev-id>scsi-0ATA_INTEL_SSDSC1NB40_BTWL445204M5400JGN</udev-id>
+      <udev-id>scsi-1ATA_INTEL_SSDSC1NB400G4_BTWL445204M5400JGN</udev-id>
+      <udev-id>scsi-355cd2e404b71151c</udev-id>
+      <udev-id>scsi-SATA_INTEL_SSDSC1NB40_BTWL445204M5400JGN</udev-id>
+      <udev-id>wwn-0x55cd2e404b71151c</udev-id>
+      <topology/>
+      <range>256</range>
+      <transport>SATA</transport>
+    </Disk>
+    <Disk>
+      <sid>43</sid>
+      <name>/dev/sda</name>
+      <sysfs-name>sda</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1f.2/ata1/host0/target0:0:0/0:0:0:0/block/sda</sysfs-path>
+      <region>
+        <length>781422768</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-path>pci-0000:00:1f.2-ata-1</udev-path>
+      <udev-id>ata-INTEL_SSDSC1NB400G4_BTWL445204HM400JGN</udev-id>
+      <udev-id>scsi-0ATA_INTEL_SSDSC1NB40_BTWL445204HM400JGN</udev-id>
+      <udev-id>scsi-1ATA_INTEL_SSDSC1NB400G4_BTWL445204HM400JGN</udev-id>
+      <udev-id>scsi-355cd2e404b7114a5</udev-id>
+      <udev-id>scsi-SATA_INTEL_SSDSC1NB40_BTWL445204HM400JGN</udev-id>
+      <udev-id>wwn-0x55cd2e404b7114a5</udev-id>
+      <topology/>
+      <range>256</range>
+      <transport>SATA</transport>
+    </Disk>
+    <MdContainer>
+      <sid>44</sid>
+      <name>/dev/md/imsm0</name>
+      <sysfs-name>md127</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md127</sysfs-path>
+      <region>
+        <length>0</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-23500775:54116ee9:5fa9b47e:82d47c4a</udev-id>
+      <topology/>
+      <range>256</range>
+      <md-level>CONTAINER</md-level>
+      <uuid>23500775:54116ee9:5fa9b47e:82d47c4a</uuid>
+      <metadata>imsm</metadata>
+      <in-etc-mdadm>false</in-etc-mdadm>
+    </MdContainer>
+    <MdMember>
+      <sid>45</sid>
+      <name>/dev/md/Volume0_0</name>
+      <sysfs-name>md126</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126</sysfs-path>
+      <region>
+        <length>742344704</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729</udev-id>
+      <topology/>
+      <range>256</range>
+      <md-level>RAID1</md-level>
+      <uuid>f2f5f05a:442a73ad:520c3c94:b974f729</uuid>
+      <in-etc-mdadm>false</in-etc-mdadm>
+    </MdMember>
+    <Gpt>
+      <sid>46</sid>
+    </Gpt>
+    <Partition>
+      <sid>47</sid>
+      <name>/dev/md/Volume0_0p1</name>
+      <sysfs-name>md126p1</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p1</sysfs-path>
+      <region>
+        <start>2048</start>
+        <length>1021952</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part1</udev-id>
+      <type>primary</type>
+      <id>239</id>
+    </Partition>
+    <Partition>
+      <sid>48</sid>
+      <name>/dev/md/Volume0_0p2</name>
+      <sysfs-name>md126p2</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p2</sysfs-path>
+      <region>
+        <start>1024000</start>
+        <length>102400000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part2</udev-id>
+      <type>primary</type>
+      <id>258</id>
+    </Partition>
+    <Partition>
+      <sid>49</sid>
+      <name>/dev/md/Volume0_0p3</name>
+      <sysfs-name>md126p3</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p3</sysfs-path>
+      <region>
+        <start>103424000</start>
+        <length>1024000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part3</udev-id>
+      <type>primary</type>
+      <id>258</id>
+    </Partition>
+    <Partition>
+      <sid>50</sid>
+      <name>/dev/md/Volume0_0p4</name>
+      <sysfs-name>md126p4</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p4</sysfs-path>
+      <region>
+        <start>104448000</start>
+        <length>102400000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part4</udev-id>
+      <type>primary</type>
+      <id>131</id>
+    </Partition>
+    <Partition>
+      <sid>51</sid>
+      <name>/dev/md/Volume0_0p5</name>
+      <sysfs-name>md126p5</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p5</sysfs-path>
+      <region>
+        <start>206848000</start>
+        <length>1024000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part5</udev-id>
+      <type>primary</type>
+      <id>239</id>
+    </Partition>
+    <Partition>
+      <sid>52</sid>
+      <name>/dev/md/Volume0_0p6</name>
+      <sysfs-name>md126p6</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p6</sysfs-path>
+      <region>
+        <start>207872000</start>
+        <length>102400000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part6</udev-id>
+      <type>primary</type>
+      <id>258</id>
+    </Partition>
+    <Partition>
+      <sid>53</sid>
+      <name>/dev/md/Volume0_0p7</name>
+      <sysfs-name>md126p7</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p7</sysfs-path>
+      <region>
+        <start>310272000</start>
+        <length>1024000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part7</udev-id>
+      <type>primary</type>
+      <id>239</id>
+    </Partition>
+    <Partition>
+      <sid>54</sid>
+      <name>/dev/md/Volume0_0p8</name>
+      <sysfs-name>md126p8</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p8</sysfs-path>
+      <region>
+        <start>311296000</start>
+        <length>102400000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part8</udev-id>
+      <type>primary</type>
+      <id>258</id>
+    </Partition>
+    <Partition>
+      <sid>55</sid>
+      <name>/dev/md/Volume0_0p9</name>
+      <sysfs-name>md126p9</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p9</sysfs-path>
+      <region>
+        <start>413696000</start>
+        <length>40960000</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part9</udev-id>
+      <type>primary</type>
+      <id>130</id>
+    </Partition>
+    <Partition>
+      <sid>56</sid>
+      <name>/dev/md/Volume0_0p10</name>
+      <sysfs-name>md126p10</sysfs-name>
+      <sysfs-path>/devices/virtual/block/md126/md126p10</sysfs-path>
+      <region>
+        <start>454656000</start>
+        <length>287686656</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>md-uuid-f2f5f05a:442a73ad:520c3c94:b974f729-part10</udev-id>
+      <type>primary</type>
+      <id>258</id>
+    </Partition>
+    <Vfat>
+      <sid>57</sid>
+      <label>sgiboot</label>
+      <uuid>A886-F248</uuid>
+    </Vfat>
+    <Xfs>
+      <sid>58</sid>
+      <uuid>59650973-f777-4a40-8da8-8fb862061c6c</uuid>
+    </Xfs>
+    <Vfat>
+      <sid>59</sid>
+      <label>sle15-boot</label>
+      <uuid>154C-4090</uuid>
+    </Vfat>
+    <Xfs>
+      <sid>60</sid>
+      <label>sle15-root</label>
+      <uuid>bbc34337-1506-49dd-8c24-860ad0172127</uuid>
+    </Xfs>
+    <Vfat>
+      <sid>61</sid>
+      <uuid>C8FC-FD55</uuid>
+    </Vfat>
+    <Xfs>
+      <sid>62</sid>
+      <label>rhel7u5-root</label>
+      <uuid>e9bbe9b9-5168-401b-8f8f-2282d4007f75</uuid>
+    </Xfs>
+    <Swap>
+      <sid>63</sid>
+      <uuid>dbda4d7a-d65f-4640-a2cb-47dbb3d42347</uuid>
+    </Swap>
+    <Xfs>
+      <sid>64</sid>
+      <label>local-shared</label>
+      <uuid>5e597244-0c08-4ce3-abb9-8f3f0dd0a35c</uuid>
+    </Xfs>
+    <Nfs>
+      <sid>65</sid>
+      <SpaceInfo>
+        <size>1073217536000</size>
+        <used>873507323904</used>
+      </SpaceInfo>
+      <server>nfsserver.domain.corp</server>
+      <path>/export/suse/sles/15.0/x86_64</path>
+    </Nfs>
+    <MountPoint>
+      <sid>66</sid>
+      <path>/mounts/mp_0000</path>
+      <mount-by>device</mount-by>
+      <mount-options>rw,relatime,vers=3,rsize=524288,wsize=524288,namlen=255,hard,nolock,proto=tcp,timeo=600,retrans=2,sec=sys,mountaddr=10.17.214.130,mountvers=3,mountport=20048,mountproto=udp,local_lock=all,addr=10.17.214.130</mount-options>
+      <mount-type>nfs</mount-type>
+      <active>true</active>
+      <in-etc-fstab>false</in-etc-fstab>
+      <freq>0</freq>
+      <passno>0</passno>
+    </MountPoint>
+  </Devices>
+  <Holders>
+    <MdUser>
+      <source-sid>43</source-sid>
+      <target-sid>44</target-sid>
+      <spare>true</spare>
+    </MdUser>
+    <MdUser>
+      <source-sid>42</source-sid>
+      <target-sid>44</target-sid>
+      <spare>true</spare>
+    </MdUser>
+    <MdUser>
+      <source-sid>43</source-sid>
+      <target-sid>45</target-sid>
+    </MdUser>
+    <MdUser>
+      <source-sid>42</source-sid>
+      <target-sid>45</target-sid>
+    </MdUser>
+    <MdSubdevice>
+      <source-sid>44</source-sid>
+      <target-sid>45</target-sid>
+      <member>0</member>
+    </MdSubdevice>
+    <User>
+      <source-sid>45</source-sid>
+      <target-sid>46</target-sid>
+    </User>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>47</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>48</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>49</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>50</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>51</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>52</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>53</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>54</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>55</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>46</source-sid>
+      <target-sid>56</target-sid>
+    </Subdevice>
+    <FilesystemUser>
+      <source-sid>47</source-sid>
+      <target-sid>57</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>48</source-sid>
+      <target-sid>58</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>49</source-sid>
+      <target-sid>59</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>50</source-sid>
+      <target-sid>60</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>53</source-sid>
+      <target-sid>61</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>54</source-sid>
+      <target-sid>62</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>55</source-sid>
+      <target-sid>63</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>56</source-sid>
+      <target-sid>64</target-sid>
+    </FilesystemUser>
+    <User>
+      <source-sid>65</source-sid>
+      <target-sid>66</target-sid>
+    </User>
+  </Holders>
+</Devicegraph>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/test/y2partitioner/clients/main_test.rb new/yast2-storage-ng-4.0.199/test/y2partitioner/clients/main_test.rb
--- old/yast2-storage-ng-4.0.195/test/y2partitioner/clients/main_test.rb        2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/test/y2partitioner/clients/main_test.rb        2018-07-23 16:25:46.000000000 +0200
@@ -116,6 +116,11 @@
 
           before do
             allow(partitioner_dialog).to receive(:device_graph).and_return(device_graph)
+
+            allow(Yast::Execute).to receive(:locally!)
+              .with("/sbin/udevadm", any_args)
+            allow(Yast::Execute).to receive(:locally!)
+              .with("/usr/lib/YaST2/bin/mask-systemd-units", any_args)
           end
 
           let(:device_graph) { instance_double(Y2Storage::Devicegraph) }
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/test/y2partitioner/dialogs/partition_size_test.rb new/yast2-storage-ng-4.0.199/test/y2partitioner/dialogs/partition_size_test.rb
--- old/yast2-storage-ng-4.0.195/test/y2partitioner/dialogs/partition_size_test.rb      2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/test/y2partitioner/dialogs/partition_size_test.rb      2018-07-23 16:25:46.000000000 +0200
@@ -31,26 +31,70 @@
 
   let(:controller) do
     pt = Y2Partitioner::Actions::Controllers::Partition.new(disk)
-    pt.region = region
     pt.custom_size = Y2Storage::DiskSize.MiB(1)
+    pt.type = partition_type
     pt
   end
   let(:disk) { "/dev/sda" }
-  let(:region) { Y2Storage::Region.create(2000, 1000, Y2Storage::DiskSize.new(1500)) }
-  let(:slot) { double("PartitionSlot", region: region) }
-  let(:regions) { [region] }
-  let(:optimal_regions) { [region] }
+
+  let(:region_prim1) { Y2Storage::Region.create(2000, 1000, Y2Storage::DiskSize.new(1500)) }
+  let(:region_log) { Y2Storage::Region.create(3001, 1000, Y2Storage::DiskSize.new(1500)) }
+  let(:region_prim2) { Y2Storage::Region.create(4001, 1000, Y2Storage::DiskSize.new(1500)) }
+  let(:slot_prim1) { double("PartitionSlot", region: region_prim1) }
+  let(:slot_log) { double("PartitionSlot", region: region_log) }
+  let(:slot_prim2) { double("PartitionSlot", region: region_prim2) }
+
+  let(:partition_type) { Y2Storage::PartitionType::LOGICAL }
+  let(:regions) { [region_log] }
+  let(:optimal_regions) { [region_log] }
+
+  before do
+    allow(slot_prim1).to receive(:possible?) do |type|
+      type != Y2Storage::PartitionType::LOGICAL
+    end
+    allow(slot_prim2).to receive(:possible?) do |type|
+      type != Y2Storage::PartitionType::LOGICAL
+    end
+    allow(slot_log).to receive(:possible?) do |type|
+      type == Y2Storage::PartitionType::LOGICAL
+    end
+  end
 
   describe Y2Partitioner::Dialogs::PartitionSize do
-    subject { described_class.new(controller) }
+    subject(:dialog) { described_class.new(controller) }
 
     before do
       allow(Y2Partitioner::Dialogs::PartitionSize::SizeWidget)
         .to receive(:new).and_return(term(:Empty))
-      allow(controller).to receive(:unused_slots).and_return [slot]
-      allow(controller).to receive(:unused_optimal_slots).and_return [slot]
+      allow(controller).to receive(:unused_slots).and_return [slot_prim1, slot_log, slot_prim2]
+      allow(controller).to receive(:unused_optimal_slots).and_return [slot_prim1, slot_log, slot_prim2]
     end
+
     include_examples "CWM::Dialog"
+
+    describe "#content" do
+      context "when creating a primary partition" do
+        let(:partition_type) { Y2Storage::PartitionType::PRIMARY }
+
+        it "offers only the regions of the primary slots" do
+          expect(Y2Partitioner::Dialogs::PartitionSize::SizeWidget).to receive(:new)
+            .with(controller, [region_prim1, region_prim2], [region_prim1, region_prim2])
+
+          dialog.contents
+        end
+      end
+
+      context "when creating a logical partition" do
+        let(:partition_type) { Y2Storage::PartitionType::LOGICAL }
+
+        it "offers only the region of the logical slot" do
+          expect(Y2Partitioner::Dialogs::PartitionSize::SizeWidget).to receive(:new)
+            .with(controller, [region_log], [region_log])
+
+          dialog.contents
+        end
+      end
+    end
   end
 
   describe Y2Partitioner::Dialogs::PartitionSize::SizeWidget do
@@ -170,7 +214,7 @@
     let(:entered_start) { 2200 }
     let(:entered_end) { 2500 }
 
-    subject { described_class.new(controller, regions, region) }
+    subject { described_class.new(controller, regions, region_log) }
 
     include_examples "CWM::CustomWidget"
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/test/y2storage/autoinst_profile/partitioning_section_test.rb new/yast2-storage-ng-4.0.199/test/y2storage/autoinst_profile/partitioning_section_test.rb
--- old/yast2-storage-ng-4.0.195/test/y2storage/autoinst_profile/partitioning_section_test.rb   2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/test/y2storage/autoinst_profile/partitioning_section_test.rb   2018-07-23 16:25:46.000000000 +0200
@@ -62,42 +62,61 @@
   end
 
   describe ".new_from_storage" do
-    let(:devicegraph) do
-      instance_double(
-        Y2Storage::Devicegraph, disk_devices: disks, lvm_vgs: [vg], md_raids: [md]
-      )
-    end
-    let(:disks) { [disk, dasd] }
-    let(:disk) { instance_double(Y2Storage::Disk) }
-    let(:dasd) { instance_double(Y2Storage::Dasd) }
-    let(:vg) { instance_double(Y2Storage::LvmVg) }
-    let(:md) { instance_double(Y2Storage::Md) }
-
-    before do
-      allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
-        .with(disk).and_return(disk_section)
-      allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
-        .with(dasd).and_return(dasd_section)
-      allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
-        .with(vg).and_return(vg_section)
-      allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
-        .with(md).and_return(md_section)
-    end
-
-    it "returns a new PartitioningSection object" do
-      expect(described_class.new_from_storage(devicegraph)).to be_a(described_class)
-    end
-
-    it "creates an entry in #drives for every relevant VG, disk and DASD" do
-      section = described_class.new_from_storage(devicegraph)
-      expect(section.drives).to eq([md_section, vg_section, disk_section, dasd_section])
+    describe "using doubles for the devicegraph and the subsections" do
+      let(:devicegraph) do
+        instance_double(
+          Y2Storage::Devicegraph, disk_devices: disks, lvm_vgs: [vg], software_raids: [md]
+        )
+      end
+      let(:disks) { [disk, dasd] }
+      let(:disk) { instance_double(Y2Storage::Disk) }
+      let(:dasd) { instance_double(Y2Storage::Dasd) }
+      let(:vg) { instance_double(Y2Storage::LvmVg) }
+      let(:md) { instance_double(Y2Storage::Md) }
+
+      before do
+        allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
+          .with(disk).and_return(disk_section)
+        allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
+          .with(dasd).and_return(dasd_section)
+        allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
+          .with(vg).and_return(vg_section)
+        allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
+          .with(md).and_return(md_section)
+      end
+
+      it "returns a new PartitioningSection object" do
+        expect(described_class.new_from_storage(devicegraph)).to be_a(described_class)
+      end
+
+      it "creates an entry in #drives for every relevant VG, disk and DASD" do
+        section = described_class.new_from_storage(devicegraph)
+        expect(section.drives).to eq([md_section, vg_section, disk_section, dasd_section])
+      end
+
+      it "ignores irrelevant drives" do
+        allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
+          .with(disk).and_return(nil)
+        section = described_class.new_from_storage(devicegraph)
+        expect(section.drives).to eq([md_section, vg_section, dasd_section])
+      end
     end
 
-    it "ignores irrelevant drives" do
-      allow(Y2Storage::AutoinstProfile::DriveSection).to receive(:new_from_storage)
-        .with(disk).and_return(nil)
-      section = described_class.new_from_storage(devicegraph)
-      expect(section.drives).to eq([md_section, vg_section, dasd_section])
+    # Regression test for bug#1098594, BIOS RAIDs were exported as
+    # software-defined ones
+    context "with a BIOS MD RAID in the system" do
+      before do
+        fake_scenario("bug_1098594.xml")
+      end
+
+      it "creates only one entry in #drives, of type CT_DISK, for the BIOS 
RAID" do
+        section = described_class.new_from_storage(fake_devicegraph)
+        expect(section.drives.size).to eq 1
+
+        drive = section.drives.first
+        expect(drive.device).to eq "/dev/md/Volume0_0"
+        expect(drive.type).to eq :CT_DISK
+      end
     end
   end
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.0.195/test/y2storage/autoinst_proposal_test.rb new/yast2-storage-ng-4.0.199/test/y2storage/autoinst_proposal_test.rb
--- old/yast2-storage-ng-4.0.195/test/y2storage/autoinst_proposal_test.rb       2018-07-02 18:03:16.000000000 +0200
+++ new/yast2-storage-ng-4.0.199/test/y2storage/autoinst_proposal_test.rb       2018-07-23 16:25:46.000000000 +0200
@@ -244,6 +244,31 @@
           )
         end
       end
+
+      context "when the reused partition is in a DASD" do
+        let(:scenario) { "dasd_50GiB" }
+
+        let(:root) do
+          { "mount" => "/", "partition_nr" => 1, "create" => false }
+        end
+
+        # Regression test for bug#1098594:
+        # the partitions are on a Dasd (not a Disk), so when the code did
+        #   partition.disk
+        # it returned nil and produced an exception afterwards
+        it "does not crash" do
+          expect { proposal.propose }.to_not raise_error
+        end
+
+        it "reuses the partition with the given partition number" do
+          proposal.propose
+          reused_part = proposal.devices.partitions.find { |p| p.name == "/dev/sda1" }
+          expect(reused_part).to have_attributes(
+            filesystem_type:       Y2Storage::Filesystems::Type::EXT2,
+            filesystem_mountpoint: "/"
+          )
+        end
+      end
     end
 
     describe "resizing partitions" do
@@ -968,6 +993,69 @@
           )
         end
       end
+
+      # Regression test for bug#1098594
+      context "installing in a BIOS-defined MD RAID" do
+        let(:scenario) { "bug_1098594.xml" }
+
+        let(:partitioning) do
+          [
+            {
+              "device" => "/dev/md/Volume0_0", "use" => "3,4,9",
+              "partitions" => [efi_spec, root_spec, swap_spec]
+            }
+          ]
+        end
+
+        let(:efi_spec) do
+          {
+            "mount" => "/boot/efi", "create" => false, "partition_nr" => 3, 
"format" => true,
+            "filesystem" => "vfat", "mountby" => "uuid", "fstopt" => 
"umask=0002,utf8=true"
+          }
+        end
+
+        let(:root_spec) do
+          {
+            "mount" => "/", "create" => false, "partition_nr" => 4, "format" 
=> true,
+            "filesystem" => "xfs", "mountby" => "uuid"
+          }
+        end
+
+        let(:swap_spec) do
+          {
+            "mount" => "swap", "create" => false, "partition_nr" => 9, 
"format" => true,
+            "filesystem" => "swap", "mountby" => "device", "fstopt" => 
"defaults"
+          }
+        end
+
+        # bug#1098594, the partitions are on an Md (not a real disk), so when the code did
+        #   partition.disk
+        # it returned nil and produced an exception afterwards
+        it "does not crash" do
+          expect { proposal.propose }.to_not raise_error
+        end
+
+        it "formats the partitions of the RAID as requested" do
+          proposal.propose
+          devicegraph = proposal.devices
+
+          expect(devicegraph.raids).to contain_exactly(
+            an_object_having_attributes("name" => "/dev/md/Volume0_0")
+          )
+
+          part3 = devicegraph.find_by_name("/dev/md/Volume0_0p3")
+          expect(part3.filesystem.mount_path).to eq "/boot/efi"
+          expect(part3.filesystem.type).to eq Y2Storage::Filesystems::Type::VFAT
+
+          part4 = devicegraph.find_by_name("/dev/md/Volume0_0p4")
+          expect(part4.filesystem.mount_path).to eq "/"
+          expect(part4.filesystem.type).to eq Y2Storage::Filesystems::Type::XFS
+
+          part9 = devicegraph.find_by_name("/dev/md/Volume0_0p9")
+          expect(part9.filesystem.mount_path).to eq "swap"
+          expect(part9.filesystem.type).to eq Y2Storage::Filesystems::Type::SWAP
+        end
+      end
     end
 
     describe "LVM on RAID" do

