Hello community,

here is the log from the commit of package yast2-storage-ng for 
openSUSE:Factory checked in at 2020-06-27 23:22:15
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/yast2-storage-ng (Old)
 and      /work/SRC/openSUSE:Factory/.yast2-storage-ng.new.3060 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "yast2-storage-ng"

Sat Jun 27 23:22:15 2020 rev:84 rq:817273 version:4.3.9

Changes:
--------
--- /work/SRC/openSUSE:Factory/yast2-storage-ng/yast2-storage-ng.changes       2020-06-10 00:37:49.173302203 +0200
+++ /work/SRC/openSUSE:Factory/.yast2-storage-ng.new.3060/yast2-storage-ng.changes     2020-06-27 23:22:18.549768798 +0200
@@ -1,0 +2,13 @@
+Fri Jun 26 14:02:34 UTC 2020 - Ancor Gonzalez Sosa <an...@suse.com>
+
+- Ensure consistent removal of LVM snapshots when the origin LV
+  is deleted (related to bsc#1120410).
+- 4.3.9
+
+-------------------------------------------------------------------
+Wed Jun 10 14:26:28 UTC 2020 - David Diaz <dgonza...@suse.com>
+
+- Partitioner: does not warn the user when the BIOS Boot partition
+  is missing in a XEN guest.
+
+-------------------------------------------------------------------

Old:
----
  yast2-storage-ng-4.3.8.tar.bz2

New:
----
  yast2-storage-ng-4.3.9.tar.bz2

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ yast2-storage-ng.spec ++++++
--- /var/tmp/diff_new_pack.mhsc86/_old  2020-06-27 23:22:19.153770784 +0200
+++ /var/tmp/diff_new_pack.mhsc86/_new  2020-06-27 23:22:19.157770797 +0200
@@ -17,7 +17,7 @@
 
 
 Name:           yast2-storage-ng
-Version:        4.3.8
+Version:        4.3.9
 Release:        0
 Summary:        YaST2 - Storage Configuration
 License:        GPL-2.0-only OR GPL-3.0-only
@@ -50,8 +50,8 @@
 Requires:       findutils
 # RB_RESIZE_NOT_SUPPORTED_DUE_TO_SNAPSHOTS
 Requires:       libstorage-ng-ruby >= 4.3.21
-# AutoYaST issue handling
-Requires:       yast2 >= 4.3.2
+# Updated Xen detection
+Requires:       yast2 >= 4.3.6
 # Y2Packager::Repository
 Requires:       yast2-packager >= 3.3.7
 # for AbortException and handle direct abort

++++++ yast2-storage-ng-4.3.8.tar.bz2 -> yast2-storage-ng-4.3.9.tar.bz2 ++++++
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/doc/lvm.md new/yast2-storage-ng-4.3.9/doc/lvm.md
--- old/yast2-storage-ng-4.3.8/doc/lvm.md       2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/doc/lvm.md       2020-06-26 16:06:40.000000000 +0200
@@ -12,8 +12,8 @@
 operations it allows for each type of logical volume. At the current stage, some operations show an
 unexpected behavior and, in most cases, would need to be adjusted. That is represented in bold text.
 
-Note that, unlike RAID0, stripped LVs are not really a separate type. Many types of LVs can be
-stripped.
+Note that, unlike RAID0, striped LVs are not really a separate type. Many types of LVs can be
+striped.
 
 ### Normal LVM
 
@@ -32,7 +32,7 @@
 - Due to a bug, **nothing in the UI identifies the displayed LVs as being special**. They basically
   look like normal LVs, although `BlkDevicesTable::DEVICE_LABELS` contains entries for both thin
   pools and thin LVs.
-- In LVM is not possible to define stripping for thin LVs, they use the stripping defined for their thin
+- In LVM it is not possible to define striping for thin LVs; they use the striping defined for their thin
   pools. The partitioner UI **reports 0 stripes for all thin LVs**.
 
 #### What can be done?
@@ -44,7 +44,7 @@
   - Delete: it works. Note it deletes the pool, all its thin volumes and the associated hidden LVs.
 
 - For thin LVs
-  - Create: it works. The **widgets for defining stripping are disabled and set to the default values**.
+  - Create: it works. The **widgets for defining striping are disabled and set to the default values**.
     Maybe it would be better to show them disabled but with the pool values. Or to not show them at all.
   - Edit (format/mount): just as a normal LV.
   - Resize: it works.
@@ -108,7 +108,7 @@
 
 #### What can be done?
 
-- Create: not possible (not to be confused with the possibility of creating stripped LVs).
+- Create: not possible (not to be confused with the possibility of creating striped LVs).
 - Edit (format/mount): just as a normal LV.
 - Resize: not allowed ("_Resizing of this type of LVM logical volumes is not supported_").
 - Delete: it works. Note it deletes also the corresponding subLVs.
Binary files old/yast2-storage-ng-4.3.8/doc/partitioner_ui/img/list-qdirstat.png and new/yast2-storage-ng-4.3.9/doc/partitioner_ui/img/list-qdirstat.png differ
Binary files old/yast2-storage-ng-4.3.8/doc/partitioner_ui/img/list-thunderbird.png and new/yast2-storage-ng-4.3.9/doc/partitioner_ui/img/list-thunderbird.png differ
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/doc/partitioner_ui.md new/yast2-storage-ng-4.3.9/doc/partitioner_ui.md
--- old/yast2-storage-ng-4.3.8/doc/partitioner_ui.md    2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/doc/partitioner_ui.md    2020-06-26 16:06:40.000000000 +0200
@@ -56,9 +56,66 @@
 
 ![System view in text mode](partitioner_ui/img/system-ncurses.png)
 
-## Ideas
+## Agreed plan (so far)
 
-Possible plans to overcome the mentioned challenges and problems:
+This is the main plan to overcome the mentioned challenges and problems. Readers interested in the
+result can simply check this section.
+
+To make navigation more understandable we plan to introduce three big changes in the layout used
+by the Partitioner to present the full picture:
+
+- Use a menu to allocate global actions that do not really fit in other parts of the UI (like
+  rescanning devices) and also for some contextual options that are not common enough to justify a
+  button.
+- Turn the left tree into a plain list of possible "views" with some numbers indicating the number
+  of elements presented in each view.
+- Use nesting (with the possibility of expanding/collapsing) in the tables to better represent the
+  relationship of disks vs partitions, volume groups vs logical volumes, filesystems vs subvolumes,
+  etc.
+
+With all that, the previous screenshot will turn into something similar to this:
+
+```
+[Configure↓][View↓][Settings↓]
+
+   ┌View ──────────────┐Available Storage on guanche
+   │─System Overview   │┌──────────────────────────────────────────────────┐   
+   │─Hard Disks (3)    ││Device           │      Size│F│Enc│Type           │   
+   │─RAID (2)          ││┬─/dev/sda       │  8.00 GiB│ │   │HGST-HGST HTS72│   
+   │─Volume Manager (1)││├──/dev/sda1     │500.00 MiB│ │   │Part of EFI    │   
+   │─Bcache (0)        ││└──/dev/sda2     │  7.51 GiB│ │   │Part of OS     │   
+   │─NFS (0)           ││+─/dev/sdb       │468.00 GiB│ │   │Disk           │   
+   │─Btrfs (1)         ││┬─/dev/sdc       │  2.00 TiB│ │   │Disk           │ 
+   │                   ││└──/dev/sdc1     │ 12.00 GiB│ │   │FAT Partition  │
+   │                   ││──/dev/md/EFI    │499.94 MiB│ │   │FAT RAID       │   
+   │                   ││──/dev/md/OS     │  7.51 GiB│ │   │PV of system   │   
+   │                   ││┬─/dev/system    │  7.50 GiB│ │   │LVM            │   
+   │                   ││└──/dev/system/ro│  6.00 GiB│ │   │Btrfs LV       │   
+   │                   │└├───────────────────────────────┤─────────────────┘   
+   │                   │[Modify↓][Partitions↓]
+   └───────────────────┘                                                       
+ [ Help ]                                      [Abort]               [Finish] 
+
+```
+
+Of course, the look and feel of the table with nested elements may not be exactly as represented
+above. That widget still must be developed and could end up looking similar to the typical list of
+mails from a mail client (in which nesting is used to manage threads) or to the widgets currently
+used to display a hierarchy of directories in QDirStat.
+
+![Nested list in Thunderbird](partitioner_ui/img/list-thunderbird.png)
+
+![Nested list in QDirStat](partitioner_ui/img/list-qdirstat.png)
+
+## Other ideas
+
+Section with ideas and concepts that were important during the development of the current plan.
+Kept for completeness and for future reference, since we still plan to incorporate parts of them
+into the final implementation.
+
+### Initial ideas
+
+Initial ideas that were originally discussed and led to the current plan.
 
  * [Idea 0: template](partitioner_ui/ideas/template.md)
 * [Idea 1: Small Adjustments](partitioner_ui/ideas/adjustments.md)
@@ -70,7 +127,7 @@
  * [Idea 6: Constrain-based definitions](partitioner_ui/ideas/inventor.md)
 * [Idea 7: Simple Techs Menu and Global Menu Bar](partitioner_ui/ideas/grouped_techs.md)
 
-## Old ideas
+### Old ideas
 
 This section collects old partial ideas that were discarded or postponed during the development of
 the Partitioner in 15.1 or 15.2. Instead of proposing a whole revamp of the interface, they address
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/doc/xen-booting.md new/yast2-storage-ng-4.3.9/doc/xen-booting.md
--- old/yast2-storage-ng-4.3.8/doc/xen-booting.md       1970-01-01 01:00:00.000000000 +0100
+++ new/yast2-storage-ng-4.3.9/doc/xen-booting.md       2020-06-26 16:06:40.000000000 +0200
@@ -0,0 +1,17 @@
+# Booting a XEN guest
+
+According to the [guest boot
+process](https://wiki.xen.org/wiki/Booting_Overview), a BIOS Boot Partition is
+not needed to boot a XEN domU (the guest) unless using Grub2 for booting its own
+kernel instead of the one provided by the XEN dom0 (the host).
+
+Since the boot process for a XEN domU is defined in its configuration file, it's
+not possible to know it during the installation. For that reason, although the
+installer proposes the BIOS Boot partition for both XEN and non-XEN systems, the
+Partitioner will not warn the user when that partition is missing in a XEN
+guest.
+
+For its part, AutoYaST will [keep trying to add a boot
+device](https://github.com/yast/yast-storage-ng/blob/af944283d0fd2220973c8d51452365c040d684ba/doc/autoyast.md#phase-six-adding-boot-devices-if-needed),
+which is not a problem because such an attempt is just complementary and the
+installation will continue regardless of whether it succeeds or not.
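
For reference, the guard implementing this behavior lives in the legacy boot-requirements strategy (full diff further below). A condensed sketch; the helpers `grub_part_needed_in_gpt?`, `missing_partition_for?` and `grub_volume` are private methods of that strategy class:

```ruby
require "yast"
Yast.import "Arch"

# Whether the "missing BIOS Boot partition" warning applies at all.
# In a XEN domU it is suppressed, since the guest typically boots the
# kernel provided by the dom0.
def include_bios_boot_warning?
  return false if Yast::Arch.is_xenU

  grub_part_needed_in_gpt? && missing_partition_for?(grub_volume)
end
```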
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/package/yast2-storage-ng.changes new/yast2-storage-ng-4.3.9/package/yast2-storage-ng.changes
--- old/yast2-storage-ng-4.3.8/package/yast2-storage-ng.changes 2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/package/yast2-storage-ng.changes 2020-06-26 16:06:40.000000000 +0200
@@ -1,4 +1,17 @@
 -------------------------------------------------------------------
+Fri Jun 26 14:02:34 UTC 2020 - Ancor Gonzalez Sosa <an...@suse.com>
+
+- Ensure consistent removal of LVM snapshots when the origin LV
+  is deleted (related to bsc#1120410).
+- 4.3.9
+
+-------------------------------------------------------------------
+Wed Jun 10 14:26:28 UTC 2020 - David Diaz <dgonza...@suse.com>
+
+- Partitioner: does not warn the user when the BIOS Boot partition
+  is missing in a XEN guest.
+
+-------------------------------------------------------------------
 Thu May 19 15:22:56 CEST 2020 - sch...@suse.de
 
 - AutoYaST: Cleanup/improve issue handling (bsc#1171335).
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/package/yast2-storage-ng.spec new/yast2-storage-ng-4.3.9/package/yast2-storage-ng.spec
--- old/yast2-storage-ng-4.3.8/package/yast2-storage-ng.spec    2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/package/yast2-storage-ng.spec    2020-06-26 16:06:40.000000000 +0200
@@ -16,7 +16,7 @@
 #
 
 Name:           yast2-storage-ng
-Version:        4.3.8
+Version:        4.3.9
 Release:        0
 Summary:        YaST2 - Storage Configuration
 License:        GPL-2.0-only OR GPL-3.0-only
@@ -49,8 +49,8 @@
 Requires:       findutils
 # RB_RESIZE_NOT_SUPPORTED_DUE_TO_SNAPSHOTS
 Requires:       libstorage-ng-ruby >= 4.3.21
-# AutoYaST issue handling
-Requires:       yast2 >= 4.3.2
+# Updated Xen detection
+Requires:       yast2 >= 4.3.6
 # Y2Packager::Repository
 Requires:       yast2-packager >= 3.3.7
 # for AbortException and handle direct abort
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2partitioner/actions/delete_lvm_lv.rb new/yast2-storage-ng-4.3.9/src/lib/y2partitioner/actions/delete_lvm_lv.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2partitioner/actions/delete_lvm_lv.rb   2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2partitioner/actions/delete_lvm_lv.rb   2020-06-26 16:06:40.000000000 +0200
@@ -47,36 +47,44 @@
       #
       # @return [Boolean]
       def confirm
-        used_pool? ? confirm_for_used_pool : super
+        affected_volumes? ? confirm_for_used_volume : super
       end
 
-      # Whether the device is a LVM thin pool and it contains any thin volume
+      # Whether deleting the device would result in other logical volumes also
+      # being deleted
       #
-      # @return [Boolean] true if it is an used pool; false otherwise.
-      def used_pool?
-        device.lv_type.is?(:thin_pool) && !device.lvm_lvs.empty?
+      # @return [Boolean] true if it is a used volume; false otherwise.
+      def affected_volumes?
+        device.descendants(Y2Storage::View::REMOVE).any? { |dev| dev.is?(:lvm_lv) }
       end
 
-      # Confirmation when the device is a LVM thin pool and there is any thin volume over it
+      # Confirmation when deleting the device affects other volumes
       #
       # @see ConfirmRecursiveDelete#confirm_recursive_delete
       #
       # @return [Boolean]
-      def confirm_for_used_pool
+      def confirm_for_used_volume
+        title =
+          if device.lv_type.is?(:thin_pool)
+            _("Confirm Deleting of LVM Thin Pool")
+          else
+            _("Confirm Deleting of LVM Logical Volume")
+          end
+
         confirm_recursive_delete(
           device,
-          _("Confirm Deleting of LVM Thin Pool"),
-          # TRANSLATORS: Confirmation message when a LVM thin pool is going to be deleted,
-          # where %{name} is replaced by the name of the thin pool (e.g., /dev/system/pool)
+          title,
+          # TRANSLATORS: Confirmation message when a LVM logical volume is going to be deleted,
+          # where %{name} is replaced by the name of the volume (e.g., /dev/system/pool)
           format(
-            _("The thin pool %{name} is used by at least one thin volume.\n" \
-              "If you proceed, the following thin volumes will be unmounted (if mounted)\n" \
+            _("The volume %{name} is used by at least one other volume.\n" \
+              "If you proceed, the following volumes will be unmounted (if mounted)\n" \
               "and deleted:"),
             name: device.name
           ),
-          # TRANSLATORS: %{name} is replaced by the name of the thin pool (e.g., /dev/system/pool)
+          # TRANSLATORS: %{name} is replaced by the name of the logical volume (e.g., /dev/system/pool)
           format(
-            _("Really delete the thin pool \"%{name}\" and all related thin volumes?"),
+            _("Really delete \"%{name}\" and all related volumes?"),
             name: device.name
           )
         )
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2partitioner/confirm_recursive_delete.rb new/yast2-storage-ng-4.3.9/src/lib/y2partitioner/confirm_recursive_delete.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2partitioner/confirm_recursive_delete.rb        2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2partitioner/confirm_recursive_delete.rb        2020-06-26 16:06:40.000000000 +0200
@@ -117,7 +117,7 @@
     # @param device [Y2Storage::Device] device to delete
     # @return [Array<String>] name of dependent devices
     def dependent_devices(device)
-      device.descendants.map(&:display_name).compact
+      device.descendants(Y2Storage::View::REMOVE).map(&:display_name).compact
     end
   end
 end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/autoinst_issues/no_disk.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/autoinst_issues/no_disk.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/autoinst_issues/no_disk.rb     2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/autoinst_issues/no_disk.rb     2020-06-26 16:06:40.000000000 +0200
@@ -50,7 +50,7 @@
           # TRANSLATORS: kernel device name (eg. '/dev/sda1')
           _("Disk '%s' was not found") % section.device
         else
-          _("Not suitable disk was found")
+          _("No suitable disk was found")
         end
       end
     end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/boot_requirements_strategies/legacy.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/boot_requirements_strategies/legacy.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/boot_requirements_strategies/legacy.rb 2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/boot_requirements_strategies/legacy.rb 2020-06-26 16:06:40.000000000 +0200
@@ -17,9 +17,12 @@
 # To contact SUSE LLC about this file by physical or electronic mail, you may
 # find current contact information at www.suse.com.
 
+require "yast"
 require "y2storage/boot_requirements_strategies/base"
 require "y2storage/partition_id"
 
+Yast.import "Arch"
+
 module Y2Storage
   module BootRequirementsStrategies
     # Strategy to calculate the boot requirements in a legacy system (x86 without EFI)
@@ -176,7 +179,7 @@
       def errors_on_gpt
         errors = []
 
-        if grub_part_needed_in_gpt? && missing_partition_for?(grub_volume)
+        if include_bios_boot_warning?
           errors << bios_boot_missing_error
           errors << grub_embedding_error
         end
@@ -279,6 +282,15 @@
         )
         SetupError.new(message: message)
       end
+
+      # Whether the warning about missing BIOS Boot partition should be included
+      #
+      # @return [Boolean] true when a needed Grub partition is missing, unless running in a XEN domU
+      def include_bios_boot_warning?
+        return false if Yast::Arch.is_xenU
+
+        grub_part_needed_in_gpt? && missing_partition_for?(grub_volume)
+      end
     end
   end
 end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/device.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/device.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/device.rb      2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/device.rb      2020-06-26 16:06:40.000000000 +0200
@@ -242,10 +242,11 @@
     #   requires an argument to decide if the device itself should be included in
     #   the result.
     #
+    # @param view [View] filter used to determine the descendants
     # @return [Array<Device>]
-    def descendants
+    def descendants(view = View::CLASSIC)
       itself = false
-      storage_descendants(itself)
+      storage_descendants(itself, view)
     end
 
     # Siblings in the devicegraph in no particular order, not including the
@@ -394,9 +395,15 @@
       update_parents_etc_status
     end
 
-    # Removes device descendants in the devicegraph
-    def remove_descendants
-      storage_remove_descendants
+    # Removes all devices that are descendants of this one in the devicegraph,
+    # according to the specified (optional) view
+    #
+    # The view should likely always be REMOVE, since it's the only one that
+    # ensures a behavior that is consistent with the system tools.
+    #
+    # @param view [View] filter used to determine the descendants
+    def remove_descendants(view = View::REMOVE)
+      storage_remove_descendants(view)
       update_etc_status
     end
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/lvm_vg.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/lvm_vg.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/lvm_vg.rb      2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/lvm_vg.rb      2020-06-26 16:06:40.000000000 +0200
@@ -114,12 +114,19 @@
     #   @return [LvmLv]
     storage_forward :create_lvm_lv, as: "LvmLv"
 
-    # @!method delete_lvm_lv(lvm_lv)
-    #   Deletes a logical volume in the volume group. Also deletes all
-    #   descendants of the logical volume.
+    storage_forward :storage_delete_lvm_lv, to: :delete_lvm_lv
+    private :storage_delete_lvm_lv
+
+    # Deletes a logical volume in the volume group. Also deletes all
+    # descendants of the logical volume.
     #
-    #   @param lvm_lv [LvmLv] volume to delete
-    storage_forward :delete_lvm_lv
+    # @param lv [LvmLv] volume to delete
+    def delete_lvm_lv(lv)
+      # Needed to enforce the REMOVE view when deleting descendants
+      lv.remove_descendants
+
+      storage_delete_lvm_lv(lv)
+    end
 
     # @!method max_size_for_lvm_lv(lv_type)
     #   Returns the max size for a new logical volume of type lv_type. The size may
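
The net effect of this override, condensed into a hypothetical session against the `lvm-types1.xml` devicegraph added for the tests below (where `normal1` has the snapshot `snap_normal1`):

```ruby
vg = Y2Storage::LvmVg.find_by_vg_name(devicegraph, "vg0")
normal1 = vg.lvm_lvs.find { |lv| lv.lv_name == "normal1" }

# Descendants are resolved with the REMOVE view before the deletion, so
# removing the origin LV now also removes its snapshots
vg.delete_lvm_lv(normal1)
vg.lvm_lvs.map(&:lv_name) # no longer includes "normal1" nor "snap_normal1"
```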
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/partition_tables/base.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/partition_tables/base.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/partition_tables/base.rb       2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/partition_tables/base.rb       2020-06-26 16:06:40.000000000 +0200
@@ -66,8 +66,18 @@
       # Deletes the given partition in the partition table and all its
       # descendants.
       #
-      # @param partition [Partition]
+      # @param partition [Partition, String] device or device name
       def delete_partition(partition, *extra_args)
+        part_obj =
+          if partition.is_a?(String)
+            partitions.find { |part| part.name == partition }
+          else
+            partition
+          end
+
+        # Needed to enforce the REMOVE view when deleting descendants
+        part_obj&.remove_descendants
+
         storage_delete_partition(partition, *extra_args)
         Encryption.update_dm_names(devicegraph)
       end
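
With the extended signature, both of the following calls behave the same (the disk and the partition name are illustrative only):

```ruby
ptable = disk.partition_table

# By device name: the Partition object is first looked up in this table,
# then its descendants are removed according to the REMOVE view
ptable.delete_partition("/dev/sda1")

# By object, exactly as before
ptable.delete_partition(ptable.partitions.first)
```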
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage/view.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage/view.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage/view.rb        1970-01-01 01:00:00.000000000 +0100
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage/view.rb        2020-06-26 16:06:40.000000000 +0200
@@ -0,0 +1,32 @@
+# Copyright (c) [2020] SUSE LLC
+#
+# All Rights Reserved.
+#
+# This program is free software; you can redistribute it and/or modify it
+# under the terms of version 2 of the GNU General Public License as published
+# by the Free Software Foundation.
+#
+# This program is distributed in the hope that it will be useful, but WITHOUT
+# ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+# FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+# more details.
+#
+# You should have received a copy of the GNU General Public License along
+# with this program; if not, contact SUSE LLC.
+#
+# To contact SUSE LLC about this file by physical or electronic mail, you may
+# find current contact information at www.suse.com.
+
+require "y2storage/storage_enum_wrapper"
+
+module Y2Storage
+  # Class to represent the different views that can be used in some functions
+  # to filter certain nodes or edges in a devicegraph
+  #
+  # This is a wrapper for the Storage::View enum
+  class View
+    include StorageEnumWrapper
+
+    wrap_enum "View"
+  end
+end
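
A minimal sketch of the difference between the views, assuming `lv` is a logical volume that acts as the origin of a snapshot:

```ruby
lv.descendants                          # View::CLASSIC, the behavior so far
lv.descendants(Y2Storage::View::REMOVE) # additionally includes the devices
                                        # (e.g. snapshots of lv) that the system
                                        # tools would delete together with lv
```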
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/src/lib/y2storage.rb new/yast2-storage-ng-4.3.9/src/lib/y2storage.rb
--- old/yast2-storage-ng-4.3.8/src/lib/y2storage.rb     2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/src/lib/y2storage.rb     2020-06-26 16:06:40.000000000 +0200
@@ -17,6 +17,7 @@
 # To contact SUSE LLC about this file by physical or electronic mail, you may
 # find current contact information at www.suse.com.
 
+require "y2storage/view"
 require "y2storage/devicegraph"
 require "y2storage/actiongraph"
 require "y2storage/compound_action"
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/data/devicegraphs/lvm-types1.xml new/yast2-storage-ng-4.3.9/test/data/devicegraphs/lvm-types1.xml
--- old/yast2-storage-ng-4.3.8/test/data/devicegraphs/lvm-types1.xml    1970-01-01 01:00:00.000000000 +0100
+++ new/yast2-storage-ng-4.3.9/test/data/devicegraphs/lvm-types1.xml    2020-06-26 16:06:40.000000000 +0200
@@ -0,0 +1,583 @@
+<?xml version="1.0"?>
+<!-- generated by libstorage-ng version 4.3.27, guanche.site.(none), 2020-06-19 14:28:57 GMT -->
+<Devicegraph>
+  <Devices>
+    <Disk>
+      <sid>42</sid>
+      <name>/dev/sdb</name>
+      <sysfs-name>sdb</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6/target6:0:0/6:0:0:0/block/sdb</sysfs-path>
+      <region>
+        <length>62530624</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>scsi-1SanDisk_Cruzer_Blade_4C532000050909104131</udev-id>
+      <udev-id>scsi-SSanDisk_Cruzer_Blade_4C532000050909104131</udev-id>
+      <udev-id>usb-SanDisk_Cruzer_Blade_4C532000050909104131-0:0</udev-id>
+      <range>256</range>
+      <transport>USB</transport>
+    </Disk>
+    <Disk>
+      <sid>43</sid>
+      <name>/dev/sda</name>
+      <sysfs-name>sda</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6/target6:0:0/6:0:0:0/block/sda</sysfs-path>
+      <region>
+        <length>62530624</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>scsi-1SanDisk_Cruzer_Blade_4C532000050909104333</udev-id>
+      <udev-id>scsi-SSanDisk_Cruzer_Blade_4C532000050909104333</udev-id>
+      <udev-id>usb-SanDisk_Cruzer_Blade_4C532000050909104333-0:0</udev-id>
+      <range>256</range>
+      <transport>USB</transport>
+    </Disk>
+    <LvmVg>
+      <sid>45</sid>
+      <vg-name>vg0</vg-name>
+      <uuid>ZHz9Oi-WbLO-4Ym6-wm2e-77c3-CV2r-tgtZdW</uuid>
+      <region>
+        <length>7166</length>
+        <block-size>4194304</block-size>
+      </region>
+      <reserved-extents>334</reserved-extents>
+    </LvmVg>
+    <LvmPv>
+      <sid>47</sid>
+      <uuid>86FSBF-Vd8V-y0XV-Fgl4-WV1h-ZQ0S-fsH4p6</uuid>
+      <pe-start>1048576</pe-start>
+    </LvmPv>
+    <LvmPv>
+      <sid>48</sid>
+      <uuid>BPC2Wu-qFpw-yuPm-9SFK-VKEU-q5vj-xMPOZO</uuid>
+      <pe-start>1048576</pe-start>
+    </LvmPv>
+    <LvmLv>
+      <sid>49</sid>
+      <name>/dev/vg0/cached1</name>
+      <sysfs-name>dm-25</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-25</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-cached1</dm-table-name>
+      <lv-name>cached1</lv-name>
+      <lv-type>cache</lv-type>
+      <uuid>TWSVLb-1qN8-JWKg-5Gc4-qP0l-cR2G-xHdJNw</uuid>
+      <used-extents>128</used-extents>
+      <stripes>1</stripes>
+      <chunk-size>65536</chunk-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>50</sid>
+      <name>/dev/vg0/cached2</name>
+      <sysfs-name>dm-30</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-30</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-cached2</dm-table-name>
+      <lv-name>cached2</lv-name>
+      <lv-type>cache</lv-type>
+      <uuid>kOolqc-S1xl-UTFe-PASg-eYdP-UpRV-XWVG7U</uuid>
+      <used-extents>128</used-extents>
+      <stripes>1</stripes>
+      <chunk-size>65536</chunk-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>53</sid>
+      <name>/dev/vg0/mirror_lv1</name>
+      <sysfs-name>dm-43</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-43</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-mirror_lv1</dm-table-name>
+      <lv-name>mirror_lv1</lv-name>
+      <lv-type>raid</lv-type>
+      <uuid>52SSTf-YXnZ-6fkf-aUkn-vvfz-IzPl-S8t8dL</uuid>
+      <used-extents>128</used-extents>
+      <stripes>2</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>54</sid>
+      <name>/dev/vg0/normal1</name>
+      <sysfs-name>dm-24</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-24</sysfs-path>
+      <region>
+        <length>384</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-normal1</dm-table-name>
+      <lv-name>normal1</lv-name>
+      <lv-type>normal</lv-type>
+      <uuid>T1y9iN-XB6f-rMXg-ZFHR-2jei-U85T-Es9U2y</uuid>
+      <used-extents>384</used-extents>
+      <stripes>1</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>55</sid>
+      <name>/dev/vg0/normal2</name>
+      <sysfs-name>dm-48</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-48</sysfs-path>
+      <read-only>true</read-only>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-normal2</dm-table-name>
+      <lv-name>normal2</lv-name>
+      <lv-type>normal</lv-type>
+      <uuid>0TCeHZ-Jc2F-TQZi-BSx9-NesQ-OUZN-GS9vKU</uuid>
+      <used-extents>128</used-extents>
+      <stripes>1</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>56</sid>
+      <name>/dev/vg0/normal3</name>
+      <sysfs-name>dm-22</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-22</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-normal3</dm-table-name>
+      <lv-name>normal3</lv-name>
+      <lv-type>normal</lv-type>
+      <uuid>2C9oG2-DBXc-REZS-euJL-VEdK-Mbsl-fzad8X</uuid>
+      <used-extents>128</used-extents>
+      <stripes>1</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>60</sid>
+      <name>/dev/vg0/raid1_lv1</name>
+      <sysfs-name>dm-38</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-38</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-raid1_lv1</dm-table-name>
+      <lv-name>raid1_lv1</lv-name>
+      <lv-type>raid</lv-type>
+      <uuid>8sUmRc-IRUe-Wuta-C2oa-jgZc-mnHM-rstbe1</uuid>
+      <used-extents>128</used-extents>
+      <stripes>2</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>62</sid>
+      <name>/dev/vg0/snap_normal1</name>
+      <sysfs-name>dm-46</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-46</sysfs-path>
+      <region>
+        <length>384</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-snap_normal1</dm-table-name>
+      <lv-name>snap_normal1</lv-name>
+      <lv-type>snapshot</lv-type>
+      <uuid>ydbf9U-xc0H-649B-wpF2-qrF2-DlHc-dTfDFL</uuid>
+      <used-extents>28</used-extents>
+      <stripes>1</stripes>
+    </LvmLv>
+    <LvmLv>
+      <sid>63</sid>
+      <name>/dev/vg0/striped1</name>
+      <sysfs-name>dm-21</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-21</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-striped1</dm-table-name>
+      <lv-name>striped1</lv-name>
+      <lv-type>normal</lv-type>
+      <uuid>BTOGuF-YIOa-OBvu-Jnm8-n8mq-JTq6-6qxWbC</uuid>
+      <used-extents>128</used-extents>
+      <stripes>2</stripes>
+      <stripe-size>4096</stripe-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>64</sid>
+      <name>/dev/vg0/striped2</name>
+      <sysfs-name>dm-20</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-20</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-striped2</dm-table-name>
+      <lv-name>striped2</lv-name>
+      <lv-type>normal</lv-type>
+      <uuid>6lIGTh-e044-SAkZ-Clyv-B9Qk-teMz-BP3UB5</uuid>
+      <used-extents>128</used-extents>
+      <stripes>2</stripes>
+      <stripe-size>4096</stripe-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>66</sid>
+      <name>/dev/vg0/thinpool0</name>
+      <active>false</active>
+      <region>
+        <length>768</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-thinpool0</dm-table-name>
+      <lv-name>thinpool0</lv-name>
+      <lv-type>thin-pool</lv-type>
+      <uuid>pRFqxd-SUdz-9ykH-FRU9-L0xn-4LYd-utEwxp</uuid>
+      <used-extents>768</used-extents>
+      <stripes>1</stripes>
+      <chunk-size>65536</chunk-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>68</sid>
+      <name>/dev/vg0/unused_cache_pool</name>
+      <active>false</active>
+      <region>
+        <length>32</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-unused_cache_pool</dm-table-name>
+      <lv-name>unused_cache_pool</lv-name>
+      <lv-type>cache-pool</lv-type>
+      <uuid>f3Ic1e-5822-asSV-M4e9-rUlE-PQj1-jjbuRL</uuid>
+      <used-extents>32</used-extents>
+      <stripes>1</stripes>
+      <chunk-size>65536</chunk-size>
+    </LvmLv>
+    <LvmLv>
+      <sid>69</sid>
+      <name>/dev/vg0/snap_snap_thinvol1</name>
+      <active>false</active>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-snap_snap_thinvol1</dm-table-name>
+      <lv-name>snap_snap_thinvol1</lv-name>
+      <lv-type>thin</lv-type>
+      <uuid>2woFvd-cnl1-GQki-iYPV-9eNw-JXhx-PwZQp1</uuid>
+    </LvmLv>
+    <LvmLv>
+      <sid>71</sid>
+      <name>/dev/vg0/snap_thinvol1</name>
+      <active>false</active>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-snap_thinvol1</dm-table-name>
+      <lv-name>snap_thinvol1</lv-name>
+      <lv-type>thin</lv-type>
+      <uuid>TQmOdb-scID-NUPD-jVDL-FeB5-CSwh-hv4gfu</uuid>
+    </LvmLv>
+    <LvmLv>
+      <sid>72</sid>
+      <name>/dev/vg0/thin_snap_normal2</name>
+      <sysfs-name>dm-47</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-47</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-thin_snap_normal2</dm-table-name>
+      <lv-name>thin_snap_normal2</lv-name>
+      <lv-type>thin</lv-type>
+      <uuid>UrZQls-Idbz-IdZh-q0DC-OD0E-x8tg-6lsvzy</uuid>
+    </LvmLv>
+    <LvmLv>
+      <sid>73</sid>
+      <name>/dev/vg0/thinvol1</name>
+      <sysfs-name>dm-19</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-19</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-thinvol1</dm-table-name>
+      <lv-name>thinvol1</lv-name>
+      <lv-type>thin</lv-type>
+      <uuid>OiYMvq-FozJ-6K21-NEnl-NEKg-UMpL-FMN3JJ</uuid>
+    </LvmLv>
+    <LvmLv>
+      <sid>74</sid>
+      <name>/dev/vg0/thinvol2</name>
+      <sysfs-name>dm-18</sysfs-name>
+      <sysfs-path>/devices/virtual/block/dm-18</sysfs-path>
+      <region>
+        <length>128</length>
+        <block-size>4194304</block-size>
+      </region>
+      <dm-table-name>vg0-thinvol2</dm-table-name>
+      <lv-name>thinvol2</lv-name>
+      <lv-type>thin</lv-type>
+      <uuid>Bg52lt-vtce-fZw3-QxIS-Ssvb-GsiJ-eDO5gD</uuid>
+    </LvmLv>
+    <Gpt>
+      <sid>75</sid>
+    </Gpt>
+    <Partition>
+      <sid>76</sid>
+      <name>/dev/sdb1</name>
+      <sysfs-name>sdb1</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6/target6:0:0/6:0:0:0/block/sdb/sdb1</sysfs-path>
+      <region>
+        <start>2048</start>
+        <length>29360128</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>scsi-1SanDisk_Cruzer_Blade_4C532000050909104131-part1</udev-id>
+      <udev-id>scsi-SSanDisk_Cruzer_Blade_4C532000050909104131-part1</udev-id>
+      <udev-id>usb-SanDisk_Cruzer_Blade_4C532000050909104131-0:0-part1</udev-id>
+      <type>primary</type>
+      <id>142</id>
+      <uuid>03eccc83-73d3-f546-8f8c-e800f3002059</uuid>
+    </Partition>
+    <Gpt>
+      <sid>78</sid>
+    </Gpt>
+    <Partition>
+      <sid>77</sid>
+      <name>/dev/sda1</name>
+      <sysfs-name>sda1</sysfs-name>
+      <sysfs-path>/devices/pci0000:00/0000:00:1a.0/usb1/1-1/1-1.2/1-1.2:1.0/host6/target6:0:0/6:0:0:0/block/sdb/sda1</sysfs-path>
+      <region>
+        <start>2048</start>
+        <length>29360128</length>
+        <block-size>512</block-size>
+      </region>
+      <udev-id>scsi-1SanDisk_Cruzer_Blade_4C532000050909104333-part1</udev-id>
+      <udev-id>scsi-SSanDisk_Cruzer_Blade_4C532000050909104333-part1</udev-id>
+      <udev-id>usb-SanDisk_Cruzer_Blade_4C532000050909104333-0:0-part1</udev-id>
+      <type>primary</type>
+      <id>142</id>
+      <uuid>f3d82e13-d36b-cb4a-92fc-387f64976de3</uuid>
+    </Partition>
+    <Xfs>
+      <sid>86</sid>
+      <uuid>9efa9f3f-8cc1-42a4-8f1f-82e297135791</uuid>
+    </Xfs>
+    <Ext3>
+      <sid>87</sid>
+      <uuid>9e15f6a7-b2bc-4beb-aecb-dfebfed1ed19</uuid>
+    </Ext3>
+    <Ext2>
+      <sid>89</sid>
+      <uuid>aaf6ab01-b53e-4de1-b48e-8d7035570271</uuid>
+    </Ext2>
+    <Ext2>
+      <sid>90</sid>
+      <uuid>80bcd670-b506-4dc3-a031-e27ce78d336e</uuid>
+    </Ext2>
+    <Ext3>
+      <sid>91</sid>
+      <uuid>b468ae98-3a22-42df-9bba-d38511077900</uuid>
+    </Ext3>
+    <Ext4>
+      <sid>92</sid>
+      <uuid>dc12ef79-8b06-4581-9df8-f58d2aa613eb</uuid>
+    </Ext4>
+    <Ext4>
+      <sid>98</sid>
+      <uuid>07e3cffa-fe4e-4fe8-8f8a-3ad92d61ebaa</uuid>
+    </Ext4>
+    <Ext2>
+      <sid>100</sid>
+      <uuid>80bcd670-b506-4dc3-a031-e27ce78d336e</uuid>
+    </Ext2>
+    <Xfs>
+      <sid>101</sid>
+      <uuid>5b3e36d7-7cb8-4507-adf8-d5823f87437a</uuid>
+    </Xfs>
+    <Ext2>
+      <sid>102</sid>
+      <uuid>8f0d0569-5299-4f56-bdc2-67781c1e318f</uuid>
+    </Ext2>
+    <Ext3>
+      <sid>105</sid>
+      <uuid>b468ae98-3a22-42df-9bba-d38511077900</uuid>
+    </Ext3>
+    <Ext4>
+      <sid>106</sid>
+      <uuid>8d97e8e4-d0ef-47d2-8062-6fc6c2018eae</uuid>
+    </Ext4>
+    <Xfs>
+      <sid>107</sid>
+      <uuid>4213af35-6f2d-45b4-90ba-e5673e0b03c0</uuid>
+    </Xfs>
+  </Devices>
+  <Holders>
+    <Subdevice>
+      <source-sid>47</source-sid>
+      <target-sid>45</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>48</source-sid>
+      <target-sid>45</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>49</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>50</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>53</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>54</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>55</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>56</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>60</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>62</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>63</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>64</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>66</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>45</source-sid>
+      <target-sid>68</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>66</source-sid>
+      <target-sid>69</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>66</source-sid>
+      <target-sid>71</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>66</source-sid>
+      <target-sid>72</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>66</source-sid>
+      <target-sid>73</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>66</source-sid>
+      <target-sid>74</target-sid>
+    </Subdevice>
+    <Snapshot>
+      <source-sid>54</source-sid>
+      <target-sid>62</target-sid>
+    </Snapshot>
+    <Snapshot>
+      <source-sid>71</source-sid>
+      <target-sid>69</target-sid>
+    </Snapshot>
+    <Snapshot>
+      <source-sid>73</source-sid>
+      <target-sid>71</target-sid>
+    </Snapshot>
+    <Snapshot>
+      <source-sid>55</source-sid>
+      <target-sid>72</target-sid>
+    </Snapshot>
+    <User>
+      <source-sid>42</source-sid>
+      <target-sid>75</target-sid>
+    </User>
+    <Subdevice>
+      <source-sid>75</source-sid>
+      <target-sid>76</target-sid>
+    </Subdevice>
+    <Subdevice>
+      <source-sid>78</source-sid>
+      <target-sid>77</target-sid>
+    </Subdevice>
+    <User>
+      <source-sid>43</source-sid>
+      <target-sid>78</target-sid>
+    </User>
+    <User>
+      <source-sid>76</source-sid>
+      <target-sid>47</target-sid>
+    </User>
+    <User>
+      <source-sid>77</source-sid>
+      <target-sid>48</target-sid>
+    </User>
+    <FilesystemUser>
+      <source-sid>49</source-sid>
+      <target-sid>86</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>50</source-sid>
+      <target-sid>87</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>53</source-sid>
+      <target-sid>89</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>54</source-sid>
+      <target-sid>90</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>55</source-sid>
+      <target-sid>91</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>56</source-sid>
+      <target-sid>92</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>60</source-sid>
+      <target-sid>98</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>62</source-sid>
+      <target-sid>100</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>63</source-sid>
+      <target-sid>101</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>64</source-sid>
+      <target-sid>102</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>72</source-sid>
+      <target-sid>105</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>73</source-sid>
+      <target-sid>106</target-sid>
+    </FilesystemUser>
+    <FilesystemUser>
+      <source-sid>74</source-sid>
+      <target-sid>107</target-sid>
+    </FilesystemUser>
+  </Holders>
+</Devicegraph>
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/y2partitioner/actions/delete_lvm_lv_test.rb new/yast2-storage-ng-4.3.9/test/y2partitioner/actions/delete_lvm_lv_test.rb
--- old/yast2-storage-ng-4.3.8/test/y2partitioner/actions/delete_lvm_lv_test.rb 2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/test/y2partitioner/actions/delete_lvm_lv_test.rb 2020-06-26 16:06:40.000000000 +0200
@@ -25,18 +25,20 @@
 
 describe Y2Partitioner::Actions::DeleteLvmLv do
   before do
-    devicegraph_stub("lvm-two-vgs.yml")
+    devicegraph_stub(scenario)
   end
+  let(:scenario) { "lvm-two-vgs.yml" }
 
   subject { described_class.new(device) }
 
   let(:device) { Y2Storage::BlkDevice.find_by_name(current_graph, device_name) }
 
   let(:current_graph) { Y2Partitioner::DeviceGraphs.instance.current }
+  let(:vg_name) { "vg1" }
 
   describe "#run" do
     before do
-      vg = Y2Storage::LvmVg.find_by_vg_name(current_graph, "vg1")
+      vg = Y2Storage::LvmVg.find_by_vg_name(current_graph, vg_name)
       create_thin_provisioning(vg)
 
       allow(Yast2::Popup).to receive(:show).and_return(accept)
@@ -45,7 +47,7 @@
 
     let(:accept) { nil }
 
-    context "when the logical volume is not a used thin pool" do
+    context "when the logical volume is a normal one without snapshots" do
       let(:device_name) { "/dev/vg1/lv1" }
 
       it "shows a confirmation message with the device name" do
@@ -65,6 +67,20 @@
           .and_call_original
 
         subject.run
+      end
+    end
+
+    context "when the logical volume is a normal one with snapshots" do
+      let(:scenario) { "lvm-types1.xml" }
+      let(:device_name) { "/dev/vg0/normal2" }
+      let(:vg_name) { "vg0" }
+
+      it "shows a detailed confirmation message including all the snapshots" do
+        expect(subject).to receive(:confirm_recursive_delete)
+          .with(device, anything, anything, /normal2/)
+          .and_call_original
+
+        subject.run
       end
     end
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/y2storage/autoinst_issues/no_disk_test.rb new/yast2-storage-ng-4.3.9/test/y2storage/autoinst_issues/no_disk_test.rb
--- old/yast2-storage-ng-4.3.8/test/y2storage/autoinst_issues/no_disk_test.rb   2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/test/y2storage/autoinst_issues/no_disk_test.rb   2020-06-26 16:06:40.000000000 +0200
@@ -35,7 +35,7 @@
       let(:device_name) { nil }
 
       it "returns a general description of the issue" do
-        expect(issue.message).to include "Not suitable disk"
+        expect(issue.message).to include "No suitable disk"
       end
     end
 
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/y2storage/boot_requirements_errors_test.rb new/yast2-storage-ng-4.3.9/test/y2storage/boot_requirements_errors_test.rb
--- old/yast2-storage-ng-4.3.8/test/y2storage/boot_requirements_errors_test.rb  2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/test/y2storage/boot_requirements_errors_test.rb  2020-06-26 16:06:40.000000000 +0200
@@ -375,6 +375,16 @@
                 match(/partition of type BIOS Boot/)
               )
             end
+
+            context "but it is running in a XEN guest" do
+              before do
+                allow(Yast::Arch).to receive(:is_xenU).and_return(true)
+              end
+
+              it "does not contain warnings" do
+                expect(checker.warnings).to be_empty
+              end
+            end
           end
 
           context "and there is a grub partition in the system" do
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/y2storage/lvm_vg_test.rb new/yast2-storage-ng-4.3.9/test/y2storage/lvm_vg_test.rb
--- old/yast2-storage-ng-4.3.8/test/y2storage/lvm_vg_test.rb    2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/test/y2storage/lvm_vg_test.rb    2020-06-26 16:06:40.000000000 +0200
@@ -25,8 +25,9 @@
   using Y2Storage::Refinements::SizeCasts
 
   before do
-    fake_scenario("complex-lvm-encrypt")
+    fake_scenario(scenario)
   end
+  let(:scenario) { "complex-lvm-encrypt" }
 
   subject(:vg) { Y2Storage::LvmVg.find_by_vg_name(fake_devicegraph, vg_name) }
 
@@ -118,4 +119,19 @@
       expect(vg.thin_lvm_lvs.map(&:lv_name)).to_not include("pool1", "pool2")
     end
   end
+
+  describe "#delete_lvm_lv" do
+    context "in a VG with snapshots" do
+      let(:scenario) { "lvm-types1.xml" }
+
+      it "deletes the snapshots of the removed LV" do
+        normal1 = vg.lvm_lvs.find { |lv| lv.lv_name == "normal1" }
+
+        expect(vg.lvm_lvs.map(&:lv_name)).to include("normal1", "snap_normal1")
+        vg.delete_lvm_lv(normal1)
+        expect(vg.lvm_lvs.map(&:lv_name)).to_not include "normal1"
+        expect(vg.lvm_lvs.map(&:lv_name)).to_not include "snap_normal1"
+      end
+    end
+  end
 end
diff -urN '--exclude=CVS' '--exclude=.cvsignore' '--exclude=.svn' '--exclude=.svnignore' old/yast2-storage-ng-4.3.8/test/y2storage/proposal/lvm_creator_test.rb new/yast2-storage-ng-4.3.9/test/y2storage/proposal/lvm_creator_test.rb
--- old/yast2-storage-ng-4.3.8/test/y2storage/proposal/lvm_creator_test.rb      2020-06-03 17:02:05.000000000 +0200
+++ new/yast2-storage-ng-4.3.9/test/y2storage/proposal/lvm_creator_test.rb      2020-06-26 16:06:40.000000000 +0200
@@ -184,6 +184,22 @@
           expect(lv_names).to include "lv1"
         end
 
+        context "if there are LVs with snapshots" do
+          let(:scenario) { "lvm-types1.xml" }
+          let(:pv_partitions) { [] }
+
+          # Ensure we have to delete some volumes, but not all
+          before { volumes.last.min = 7.GiB }
+
+          it "deletes the snapshots of the removed LVs " do
+            devicegraph = creator.create_volumes(vg, pv_partitions).devicegraph
+            reused_vg = devicegraph.lvm_vgs.first
+            lv_names = reused_vg.lvm_lvs.map(&:lv_name)
+            expect(lv_names).to_not include "normal1"
+            expect(lv_names).to_not include "snap_normal1"
+          end
+        end
+
         context "and make policy is set to :keep" do
           before { vg.make_space_policy = :keep }
 

