I posted the problem description to the Qubes issue tracker last month, appending it 
to the then-closed issue #3964, "VMs cannot start": 
https://github.com/QubesOS/qubes-issues/issues/3964

Quoting my post here (see the link for the whole thread): 

...

>It seems I hit a similar problem on Thursday. I installed the latest dom0 updates 
>(including e.g. qubes-manager-4.0.26-1), rebooted, and tried to update all relevant 
>templates using the new qubes-update-gui, which finally works after this last update. 
>The updates went fine (except for whonix-14 - a problem with a fix already described 
>elsewhere), but afterwards, while I was restarting all AppVMs, the system (dom0) 
>stopped responding during startup of the (Debian 9 based) network VM (sys-net-deb). 
>This used to happen on every start when the NIC was assigned without the 
>no-strict-reset and permissive options - a problem already solved, but, as I assumed, 
>perhaps reintroduced by some setting change resulting from the last update.
>
>I forced a restart (hardware button) and, seeing that sys-net-deb failed to start 
>on Qubes boot, reselected the devices assigned to sys-net-deb and re-enabled 
>no-strict-reset for them using Qube Manager. On the next restart I finally noticed 
>that it is not just sys-net-deb that fails to start.
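>
>(For completeness: I believe the command-line equivalent of what I did in Qube 
>Manager is roughly the following - the PCI address is just a placeholder, not my 
>actual NIC:
>
>    # re-attach the NIC to sys-net-deb with relaxed reset handling (example address)
>    qvm-pci attach --persistent --option no-strict-reset=true sys-net-deb dom0:03_00.0
>
>I'm not certain this is exactly what Qube Manager does internally.)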
>
>Now, even after a few subsequent restarts, no VM (except dom0) starts.
>
>I get these errors on each attempted VM startup:
>
>For VMs connected to sys-net-deb through sys-firewall:
>
>    Domain sys-net-deb failed to start:
>    Logical Volume "vm-sys-net-deb-volatile" already exists in volume group "qubes_dom0"
>
>or the same error referencing the vm-sys-net-deb-root-snap or 
>vm-sys-net-deb-private-snap volume.
>
>sys-usb (no network, USB controller assigned) throws similar errors, but 
>referencing itself instead of sys-net-deb.
>
>For most VMs without a network VM assigned:
>
>    Domain <JustTryingToStartDomainName> has failed to start:
>    device-mapper: message ioctl on (253:3) failed: File exists
>    Failed to process thin pool message "create_snap 1666 1649".
>    Failed to suspend qubes_dom0/pool00 with queued messages.
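>
>Side note: to check which device-mapper node (253:3) actually is - presumably the 
>thin pool itself, given the pool00 messages right after it - I believe either of 
>these shows the name/number mapping (dm-3 simply mirrors the minor number from the 
>error):
>
>    sudo dmsetup ls     # lists dm device names with their major:minor numbers
>    ls -l /dev/dm-3     # the major, minor pair (253, 3) appears in the listing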
>
>Every attempt so far to sudo lvremove qubes_dom0/vm-<SomeName>-volatile, -root-snap 
>or -private-snap (see the exact form below) has ended unsuccessfully with:
>
>    device-mapper: message ioctl on (253:3) failed: File exists
>    Failed to process thin pool message "create_snap 1666 1649".
>    Failed to suspend qubes_dom0/pool00 with queued messages.
>    Failed to update pool qubes_dom0/pool00.
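>
>For reference, the exact form of the commands I tried (the VM name here is just an 
>example; I substituted the names reported in the startup errors):
>
>    # try to remove the leftover snapshot/volatile volumes for one VM
>    sudo lvremove qubes_dom0/vm-sys-net-deb-volatile
>    sudo lvremove qubes_dom0/vm-sys-net-deb-root-snap
>    sudo lvremove qubes_dom0/vm-sys-net-deb-private-snap
>
>Each of these fails with the device-mapper error quoted above.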
>
>lvs output (summary; the exact command is sketched after this list):
>
>    pool00: Data% 74.56, Meta% 50.37
>
>One of the AppVMs has its -private and -private-*-back volumes at around 98% Data% 
>(I don't think that means anything - it hasn't been run for ages and it's just an 
>ordinary AppVM).
>
>Per-VM entries:
>- for most AppVMs there is only a vm-<Name>-private entry and the corresponding 
>  -*-back entry;
>- for AppVMs with assigned devices (sys-net-deb, sys-usb) there are -volatile, 
>  -private-snap and -root-snap entries as well; the same applies to whonix-gw-14 
>  (although it has no device attached, and never had, it is a template and is set 
>  to the standard PVH virtualization mode) and to two (random?) AppVMs (one based 
>  on Debian 9, one on Fedora 28) which I don't remember running recently, nor ever 
>  having any devices assigned;
>- for most templates there are just -private, -root and -root-*-back entries.
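>
>The summary above is from plain lvs output; to narrow it down to the relevant 
>columns, I believe something like this also works (field names as I understand 
>the lvs man page):
>
>    # list LVs in the Qubes volume group with their data/metadata usage
>    sudo lvs -o lv_name,pool_lv,data_percent,metadata_percent qubes_dom0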


I'd like to ask here for advice on recovering my system from the described (still 
unchanged) state without data loss. I have little to no knowledge of Xen and LVM 
thin provisioning administration, so I'm pretty clueless. Thanks for any help. 

--
Whatevrr
