Re: [qubes-users] Restored GPG domain from Q4.0 to Q4.1, won't start (xenbus_probe_frontend?)

2023-03-28 Thread Thomas Kerin
Transferring the data got me back in action, so I'm quite happy I had the
old system running and didn't have to muck about to regain access to the
files.

I've normally had good enough luck with Qubes backup and restore, but it
does seem like qubes from older systems can run into incompatibilities when
imported into a newer Qubes release, even if the corresponding template is
restored too.

Does anyone recognize the error, or know why the VM failed to boot
under Qubes 4.1? Could we check for such conditions during the Qubes
restore process?
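
One thing I'd compare first, if anyone wants to dig in (just a guess on my
part): xvdd is, as far as I understand, the kernel/modules volume that dom0
provides when the qube uses a dom0-provided kernel, so the restored qube's
kernel setting versus a working qube's might be telling. Something like:

    qvm-prefs gpg kernel    # kernel dom0 provides to the restored qube
    qvm-prefs ssh kernel    # same property on a qube that boots fine

(The VM names here are just the ones from this thread.)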

On Tue, Mar 28, 2023 at 1:45 PM sambucium  wrote:

> I restored my system from a laptop running Qubes 4.0 recently
>
> The template for my gpg domain is based on debian-10. I restored both the
> gpg domain and the template into the new system, but the gpg domain won't
> start
>
> It seems to get stuck waiting for the xvdd device to attach to the VM.
>
> /var/log/xen/console/guest-gpg.log
> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> xvda: xvda1 xvda2 xvda3
> blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> xenbus_probe_frontend: Waiting for devices to initialize:
> 25s..20s..15s...10s...5s...0s...
>
> before the VM shuts down
>
>
> If I boot another VM (ssh/lan) which uses the same template, I get this
> instead:
>
> blkfront: xvda: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> xvda: xvda1 xvda2 xvda3
> blkfront: xvdb: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> blkfront: xvdc: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> blkfront: xvdd: flush diskcache: enabled; persistent grants: enabled;
> indirect descriptors: enabled; bounce buffer: enabled
> ...
> ..
> Waiting for /dev/xvdd device...
> /dev/xvdd: Can't open blockdev
> ...
> VM boots anyway
>
>
> I googled that line and see references to /var/log/xen/xen-hotplug.log -
> this file is empty
>
> I'm wondering how old that VM is, maybe it's come from Qubes 3.2 -> 4.0 ->
> 4.1 and finally it's running into bother?
>
> Other VMs seem to boot from that template, which is odd. I'm going to try
> transferring the data to a fresh qube, transfer THAT to the new system, and
> hopefully it works, but otherwise I'm at a loss here.
>



Re: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-29 Thread thomas . kerin


My current problem is this error (triggerable by running `lvm lvchange -ay
-v qubes_dom0/vm-vault-personal-docs-private`). It happens with some other
affected VMs as well.

Activating logical volume qubes_dom0/vm-vault-personal-docs-private 
exclusively.
activation/volume_list configuration setting not defined: Checking only 
host tags for qubes_dom0/vm-vault-personal-docs-private.
Loading qubes_dom0-pool00_tdata table (253:2)
Suppressed qubes_dom0-pool00_tdata (253:2) identical table reload.
Loading qubes_dom0-pool00_tmeta table (253:1)
Suppressed qubes_dom0-pool00_tmeta (253:1) identical table reload.
Loading qubes_dom0-pool00-tpool table (253:3)
Suppressed qubes_dom0-pool00-tpool (253:3) identical table reload.
Creating qubes_dom0-vm--vault--personal--docs--private
Loading qubes_dom0-vm--vault--personal--docs--private table (253:52)
  device-mapper: reload ioctl on (253:52) failed: No data available
Removing qubes_dom0-vm--vault--personal--docs--private (253:52)


Btw, I've scanned through the files in /etc/lvm/archive. I wasn't sure if I
should follow your advice there (vgcfgrestore), as that command requires
--force to work with thin provisioning and I've seen warnings about this
online. I do see the files contain references to the volumes that lvs
doesn't show.
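
For the record, this is just how I've been inspecting the archive so far
(listing only, nothing restored yet):

    sudo vgcfgrestore --list qubes_dom0    # list archived metadata versions for the VG
    less /etc/lvm/archive/qubes_dom0_*.vg  # the text files that name the missing LVs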

I tested thin_check on the qubes_dom0/pool00_meta0 volume (created by
lvconvert --repair qubes_dom0/pool00), and get the following:
examining superblock
examining devices tree
  missing devices: [1, 747]
  too few entries in btree_node: 41, expected at least 42 (max entries=126)
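
For completeness, the recovered copy is a plain LV, so checking it was just
a matter of activating it and pointing thin_check at it:

    sudo lvchange -ay qubes_dom0/pool00_meta0     # activate the recovered metadata copy
    sudo thin_check /dev/qubes_dom0/pool00_meta0  # validation only, doesn't modify it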

Running thin_check on meta1 and meta2 (created by running lvconvert
--repair a further 2 times) doesn't yield anything major:
examining superblock
examining devices tree
examining mapping tree
checking space map counts

I've followed a procedure to get a metadata snapshot (I couldn't directly
access _tmeta using normal tools): https://serverfault.com/a/971620
and used thin_dump on it as well as on the _meta0, _meta1, and _meta2
volumes created by `lvconvert --repair`.
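
Roughly what the linked answer does, with my pool names filled in; I'm
writing this from memory, so treat it as a sketch:

    sudo dmsetup message qubes_dom0-pool00-tpool 0 reserve_metadata_snap   # freeze a metadata snapshot
    sudo thin_dump -m /dev/mapper/qubes_dom0-pool00_tmeta > tmeta.xml      # dump from that snapshot
    sudo dmsetup message qubes_dom0-pool00-tpool 0 release_metadata_snap   # release it again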

I diff'd the tmeta dump against the others, and it seems only the first
line (the superblock header) is different: the pool's tmeta reports
nr_data_blocks = 0, while the recovered _metaN copies don't.

Maybe my data is still there but the metadata is wrong?
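
If that's the case, and if I can ever produce a dump I trust, my
understanding from the lvmthin docs is that the manual route would be
something like the following (completely untested on my side; the temp LV
name and size are just placeholders):

    sudo lvcreate -L 1G -n pool00_meta_fixed qubes_dom0                       # spare LV to hold rebuilt metadata
    sudo thin_restore -i good_dump.xml -o /dev/qubes_dom0/pool00_meta_fixed   # write the XML dump back as binary metadata
    sudo lvconvert --thinpool qubes_dom0/pool00 --poolmetadata qubes_dom0/pool00_meta_fixed   # swap it in (pool inactive)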

On Monday, 29 July 2019 16:37:26 UTC, thoma...@gmail.com wrote:

> Oh, forgive me, no, not all VMs are present in lvs.



Re: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-29 Thread thomas . kerin
Oh, forgive me, no, not all VMs are present in lvs.



Re: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-29 Thread Thomas Kerin
Thanks Chris for your response!

On Mon, 29 Jul 2019, 4:25 pm Chris Laprise,  wrote:

> On 7/29/19 10:19 AM, thomas.ke...@gmail.com wrote:
> > Thanks for your response.
> >
> > I have some which don't show as active - it's looking like some data
> loss..
> >
> > Something I am getting when I run
> > Lvconvert --repair qubes_dom0/pool00
> >
> >
> > WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin
> pools and the size of whole volume group (931.02GiB)
> >
> > Is this something I can fix perhaps?
>
> This is normal. Thin provisioning usually involves over-provisioning,
> and that's what you're seeing. Most of our Qubes systems display this
> warning when using lvm commands.
>
>
Understood. Thanks!


> >
> > Also, I have some large volumes which are present. I've considered
> trying to remove them, but I might hold off until I get data off the active
> volumes first..
> >
> > I've run across the thin_dump / thin_check / thin_repair commands. It
> seems they're used under the hood by lvconvert --repair to check thin
> volumes.
> >
> > Is there a way to relate those dev_ids back to the thin volumes lvm
> can't seem to find?
>
> If 'lvs' won't show them, then I don't know precisely how. A long time
> ago, I think I used 'vgcfgrestore /etc/lvm/archive/' to
> resolve this kind of issue.
>
>

Sorry, I mean, lvs does show them; I'm just wondering what it'll take to
show them as active again.

That directory seems to just have files from today!

> I also recommend seeking help from the wider Linux community, since this
> is a basic Linux storage issue.
>

I have spent the morning researching, and found a few posts on redhat.com
and some other sites describing how to repair the metadata.

The most common reason seems to be overflowing the metadata volume, though
mine is currently only around 37% full.
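
That figure is from something along these lines, in case it's useful to
anyone comparing:

    sudo lvs -a -o lv_name,lv_size,data_percent,metadata_percent qubes_dom0   # data%/metadata% usage per LV and pool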

Others (at least one Qubes user) encountered this after a power failure. I
shut down cleanly as far as I can tell; this was a routine reboot.

> And of course, a reminder that these mishaps are a good reason to do the
> following:
>
> 1. After installation, at least double the size of your pool00 tmeta
> volume.
>
> 2. Perform regular backups (I'm working on a tool that can backup lvs
> much quicker than the Qubes backup tool).
>
I definitely agree with both, although point one seems unlikely to have
been the cause in this case.
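
For anyone who does want to follow point one, my understanding is it's a
one-liner along these lines (the size is just an example; I haven't done it
on this install, which I now regret):

    sudo lvextend --poolmetadatasize +128M qubes_dom0/pool00   # grow the pool's tmeta volume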

I'm fairly sure the main disk has about 50% free space as well.

Backups are evidently a must. I've screwed up Qubes installs before, but
never lost data until maybe now. I know LVM thin provisioning was only
adopted in R4.0, and everything else has been going so well with this
install, but I had only just recovered and organized several old disks'
worth of data, so I'll be gutted if I've lost it and won't know why :/

I see a few people posting on the GitHub qubes-issues repo; one says three
people in the past month have had this issue (or at least the same
symptoms).

>
> --
>
> Chris Laprise, tas...@posteo.net
> https://github.com/tasket
> https://twitter.com/ttaskett
> PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
>



Fwd: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-29 Thread Thomas Kerin
Sorry, didn't send to list. See my response to Chris.

-- Forwarded message -
From: Thomas Kerin 
Date: Mon, 29 Jul 2019, 3:40 pm
Subject: Re: [qubes-users] fixing LVM corruption, question about LVM
locking type in Qubes
To: Chris Laprise 


Hi Chris,

Yes, I think I tried that once last night.

I notice it creates a qubes_dom0/pool00_meta$N volume each time.

Note my earlier post (before I saw yours!) had a weird warning about the
sum of thin volume sizes (2.67 TiB) exceeding the size of the pools and the
volume group.

Output this time was:
WARNING: recovery of pools without pool metadata spare LV is not automated
WARNING: if everything works, remove qubes_dom0/pool00_meta2 volume
WARNING: Use pvmove command to move qubes_dom0/pool00_meta2 on the best
fitting PV


Currently I have qubes_dom0/pool00_meta0, 1, and 2.
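
Per that warning, once (if!) everything is confirmed working, I gather the
stale copies can simply be dropped, e.g.:

    sudo lvremove qubes_dom0/pool00_meta2   # repeat for _meta0 and _meta1 once nothing is needed from them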

On Mon, 29 Jul 2019, 3:18 pm Chris Laprise,  wrote:

> On 7/28/19 9:47 PM, 'awokd' via qubes-users wrote:
> > 'awokd' via qubes-users:
> >> thomas.ke...@gmail.com:
> >>> I've just encountered this issue, and I thought my problems were over
> >>> once I found this post..
> >>>
> >>> Fyi, previously lvscan on my system shown root, pool00, and every
> >>> volume but swap as inactive
> >>>
> >>> I followed your instructions, but the system still fails to boot.
> >>> I've run 'vgchange -ay' and o saw the following printed a number of
> >>> times.
> >>>
> >>> device-mapper: table 253:6: thin: Couldn't open thin internal device
> >>>device-mapper: reload ioctl on (253:6) failed: no data available
> >>>
> >>>
> >>> I ran 'lvscan' again, and this time some VMS were marked active, but
> >>> a number (root,various -back volumes, several -root volumes, etc)
> >>>
> >>> Really terrified everything is gone as I had just recovered from a
> >>> backup while my hardware got fixed, but I don't have the backup
> anymore.
> >>>
> >> Can't tell which post you're replying to, but I get the idea. The
> >> volumes you are most concerned about all end in --private. If you've
> >> gotten them to the point where they show as active, you can make a
> >> subdir and "sudo mount /dev/mapper/qubes_dom0-vm--work--private
> >> subdir" for example, copy out the contents, umount subdir and move on
> >> to the next. You can ignore --root volumes, since installing default
> >> templates will recreate. If you can't get the --private volumes you
> >> want to show as active, I'm afraid recovering those is beyond me.
> >>
> > Also, if you can't get a --private volume active, try its
> > --private--##--back equivalent.
>
> Did you run "lvm lvconvert --repair qubes_dom0/pool00"? I think that
> would be one of the first things you do when the underlying thin device
> fails.
>
> If it needs additional space, you could delete the swap lv, then re-add
> it later.
>
> --
>
> Chris Laprise, tas...@posteo.net
> https://github.com/tasket
> https://twitter.com/ttaskett
> PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886
>



Re: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-29 Thread thomas . kerin
Thanks for your response.

I have some which don't show as active; it's looking like some data loss...

Something I am getting when I run
lvconvert --repair qubes_dom0/pool00:


WARNING: Sum of all thin volume sizes (2.67TiB) exceeds the size of thin pools 
and the size of whole volume group (931.02GiB)

Is this something I can fix perhaps?

Also, I have some large volumes which are present. I've considered trying to 
remove them, but I might hold off until I get data off the active volumes 
first..

I've run across the thin_dump / thin_check / thin_repair commands. It seems 
they're used under the hood by lvconvert --repair to check thin volumes.

Is there a way to relate those dev_ids back to the thin volumes lvm can't seem 
to find?



Re: [qubes-users] fixing LVM corruption, question about LVM locking type in Qubes

2019-07-28 Thread thomas . kerin
I've just encountered this issue, and I thought my problems were over once
I found this post...

FYI, previously lvscan on my system showed root, pool00, and every volume
but swap as inactive.

I followed your instructions, but the system still fails to boot. I ran
'vgchange -ay' and saw the following printed a number of times:

device-mapper: table 253:6: thin: Couldn't open thin internal device
  device-mapper: reload ioctl on (253:6) failed: no data available


I ran 'lvscan' again, and this time some VMs were marked active, but a
number were not (root, various -back volumes, several -root volumes, etc).

I'm really terrified everything is gone, as I had just recovered from a
backup while my hardware got fixed, but I don't have the backup anymore.



[qubes-users] Feature discussion: creating storage volumes

2019-03-27 Thread thomas . kerin
I wanted to ask about including a feature in Qubes to create storage volumes on 
disk which can then be assigned to a VM.

Presently, I'm writing an ISO to a USB by downloading it in one VM and
writing it to disk in another. Instead of copying the data across, I'd
rather write it directly to a volume I can later mount in the other VM.

I imagine it'd go into Qubes Manager, listing volumes, their usage, and where 
each is attached.

It's of course possible with a second drive, or by making space on my drive
for another volume, but I'd much rather they were managed by Qubes.
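
I believe the loop-device trick documented for attaching files as block
storage would also cover my ISO case, roughly like this (untested by me,
and the VM names are just examples):

    # in the qube that downloaded the ISO:
    truncate -s 8G ~/iso-volume.img
    sudo losetup -f --show ~/iso-volume.img      # prints e.g. /dev/loop0
    # in dom0, hand that loop device to the qube doing the writing:
    qvm-block attach writer downloader:loop0

It works, but it's exactly the kind of thing I'd like Qubes to manage for
me.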

I haven't found a discussion on this before, please forgive me if I missed it!



Re: [qubes-users] Restore from R4.0 backup failing

2019-03-25 Thread Thomas Kerin
I reinstalled Qubes R4.0.1, and during the reboot-and-setup phase I left
all options unchecked. Qubes installed some template VMs, but didn't add
sys-net/sys-usb/sys-firewall, as desired.

This time I started by restoring a small number of VMs and it worked; I am
now trying more, and I don't get the original error.

I should summarize what I tried:
 - fresh install, accepted the default VMs in setup, set up net VM and USB
VM: got errors during restore
 - fresh install, no checkboxes ticked except the last ('don't set anything
up'): got errors during restore
 - fresh install, no checkboxes ticked at all: backups working!

I'm wondering whether I could have clicked 'Don't set anything up' the
first time around. I didn't record logs from that session, but I'm guessing
sys-net / sys-usb having their devices attached may conflict with the
restore of similar VMs from an old machine?
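
If that's the cause, a workaround I might try next time (untested) is
restoring from the dom0 command line and skipping the colliding service
VMs, e.g.:

    qvm-backup-restore -x sys-net -x sys-usb -x sys-firewall /path/to/backup   # restore everything except the listed qubes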

On Mon, Mar 25, 2019 at 10:19 PM  wrote:

> I installed a new SSD in my laptop, so I created a backup of my old one
> and tried to restore it into a fresh install of Qubes R4.0.1 (the old
> system was Qubes R4.0).
>
> I'm receiving errors when I try to restore from the backup - 'Error
> restoring VM: empty response from qubesd - check journalctl for details'. I
> first tried restoring all VM's in the backup, before trying just 'sys-net'
> and 'fedora-29'
>
> I have run the restore in 'verify only' mode and it passes.
>
> I checked journalctl, and I can see some python tracebacks:
>
> The first error is regarding sys-net:
> "AttributeError: 'TemplateVM' object has no attribute '_qubesprop_kernel'"
> followed by:
> 'TemplateVM' object has no attribute: '_qubesprop_kernel'
> followed by:
> 'Qubes' object has no attribute 'qubesprop_default_kernel'
> followed by:
> property 'default_kernel' have no default
>
> I checked the global system settings, which indicate my new system's
> default kernel is '4.14.74-1'.
>
> There was another error also, while processing fedora-29:
> "'Template VM' object has no attribute '_qubesprop_kernel'"
> followed by:
> "'Template VM' object has no attribute 'template'"
> followed by
> "'Qubes' object has no attribute: 'qubesprop_default_kernel'"
>
> I'm not sure where to begin with fixing this, but I'm really glad I still
> have the original drive :)
>
> One thing to note is that in the reboot-and-setup phase I clicked "Don't
> configure anything". Could my new system be missing configuration settings,
> or are the python errors related to the contents of the backup? I checked
> this because first time, I accepted the defaults and found it hard to
> remove the colliding VM's.
>
> I think I'll try installing again and letting qubes do some of that
> configuration and see how that goes.
>
> I notice the other similar sounding report seems to be due to the
> passphrase, and perhaps is not related to my issue.
>



[qubes-users] Restore from R4.0 backup failing

2019-03-25 Thread thomas . kerin
I installed a new SSD in my laptop, so I created a backup of my old one and 
tried to restore it into a fresh install of Qubes R4.0.1 (the old system was 
Qubes R4.0).

I'm receiving errors when I try to restore from the backup: 'Error restoring
VM: empty response from qubesd - check journalctl for details'. I first
tried restoring all VMs in the backup, before trying just 'sys-net' and
'fedora-29'.

I have run the restore in 'verify only' mode and it passes.

I checked journalctl, and I can see some python tracebacks:

The first error is regarding sys-net:
"AttributeError: 'TemplateVM' object has no attribute '_qubesprop_kernel'"
followed by:
'TemplateVM' object has no attribute: '_qubesprop_kernel'
followed by:
'Qubes' object has no attribute 'qubesprop_default_kernel'
followed by:
property 'default_kernel' have no default

I checked the global system settings, which indicate my new system's
default kernel is '4.14.74-1'.

There was another error also, while processing fedora-29:
"'Template VM' object has no attribute '_qubesprop_kernel'"
followed by:
"'Template VM' object has no attribute 'template'"
followed by
"'Qubes' object has no attribute: 'qubesprop_default_kernel'"

I'm not sure where to begin with fixing this, but I'm really glad I still have 
the original drive :)

One thing to note is that in the reboot-and-setup phase I clicked "Don't
configure anything". Could my new system be missing configuration settings,
or are the Python errors related to the contents of the backup? I mention
this because the first time, I accepted the defaults and found it hard to
remove the colliding VMs.

I think I'll try installing again, letting Qubes do some of that
configuration, and see how that goes.

I notice the other similar-sounding report seems to be due to the
passphrase, and perhaps is not related to my issue.



[qubes-users] Fix large mouse cursor in dom0

2016-12-13 Thread Thomas Kerin
Hi all,


Sorry for this one: I have somehow made the mouse cursor become twice as
large whenever it's moved over a dom0 window (Qubes Manager, system
settings, etc.).

I've explored the settings menu and found some likely settings, but no luck.

Is there a place I can safely nuke to get back to the defaults?
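
For what it's worth, the only places I've thought to check so far (no luck
yet, so these are just guesses) are the X resources and whatever cursor
size the desktop settings write out:

    xrdb -query | grep -i cursor   # shows an Xcursor.size setting, if one is loaded
    grep -i cursor ~/.Xresources   # check whether something set it persistently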

Thanks,

Thomas
