[qubes-users] Backup verification error

2018-10-17 Thread Kelly Dean
On Qubes 4.0, I did a full backup to an external hard drive using the standard 
backup utility, which completed successfully: 225GB total, 130GB compressed, 
taking about 12 hours.
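(Side note: those figures imply the backup ran at only a few MB/s. A quick sanity 
check, using only the numbers above:)

```python
# Effective throughput of the backup, from the figures above:
# 130 GB compressed written over roughly 12 hours of wall-clock time.
compressed_gb = 130
hours = 12
mb_per_s = compressed_gb * 1000 / (hours * 3600)
print(f"effective rate: ~{mb_per_s:.1f} MB/s")  # → effective rate: ~3.0 MB/s
```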

Then I tried to verify it (restore with verify-only option), and watched it for 
a few minutes to make sure it was running ok, then left it alone. Last line in 
the message window at the time was:
Extracting data: 209.6 GiB to restore

Came back a few hours later and the next line was:
Finished with errors!

And there was a dialog box:

[Dom0] Backup error!
ERROR: failed to decrypt
/var/tmp/restorexnuw0a8p/vm31/private.img.034.enc: b'scrypt:
Decrypting file would take too much CPU time\n'
Partially restored files left in /var/tmp/restore_*, investigate them
and/or clean them up
OK

So now I don't know if I have a good backup. The error message also leaves me 
doubtful that I'd be able to restore the backup even if it is good.

The indicated file doesn't exist in dom0. Neither does any other 
/var/tmp/restore* file. Googling the message (Decrypting file would take too 
much CPU time) finds nothing.
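If the message comes from the scrypt utility itself (which the b'scrypt: ...' 
prefix suggests), then scrypt is refusing because it estimates that the work 
parameters stored in the file would exceed its CPU-time budget on the current 
machine (the utility's -t option raises that budget). A toy sketch of that 
logic, with every number here assumed for illustration:

```python
# Illustrative sketch (not Qubes or scrypt code): why a file that encrypted
# fine on one machine can be refused at decryption time on another.
# All parameter and speed values below are assumptions.

def estimated_scrypt_seconds(N, r, p, salsa_ops_per_sec):
    # scrypt's core cost is roughly 4 * N * r * p Salsa20/8 block operations
    return 4 * N * r * p / salsa_ops_per_sec

# Work parameters as they might be recorded in the encrypted file's header
N, r, p = 2**20, 8, 1
fast_machine = 50e6   # Salsa20/8 block ops/sec on the machine that encrypted
slow_machine = 5e5    # a much slower (or throttled) machine trying to decrypt

for ops in (fast_machine, slow_machine):
    t = estimated_scrypt_seconds(N, r, p, ops)
    print(f"{ops:.0e} ops/s -> ~{t:.1f} s estimated")
```

If the estimate exceeds the budget, scrypt aborts with exactly this kind of 
"would take too much CPU time" error rather than grinding away.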

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/65fJMu1t2Tij2ncYtCIPMXZngqVKtVVw1ULOeAOmJPX%40local.
For more options, visit https://groups.google.com/d/optout.


Re: [qubes-users] Incredible HD thrashing on 4.0

2018-08-13 Thread Kelly Dean


Chris Laprise writes:

> Can Qubes access all of that RAM? Look at the total_memory figure from
> 'xl info'.

Yes, it can.

One additional data point: after the typical one-minute boot time for a qube, 
it's using no swap space, and dom0 is also using no swap space, even though 
both do have swap enabled. So, memory pressure isn't the problem.

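(To make the check reproducible: swap usage can be read straight from 
/proc/meminfo, the same way free computes it, and that file looks the same in 
dom0 and in a qube. Minimal sketch:)

```python
# Report swap in use by parsing /proc/meminfo (values are in kB).
def swap_usage_kib(path="/proc/meminfo"):
    fields = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            fields[key] = int(rest.split()[0])
    return fields["SwapTotal"] - fields["SwapFree"]

print(f"swap in use: {swap_usage_kib()} KiB")
```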

Some other qubes are using some swap space, but they were idle while I was 
timing the boot of the test qube.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/aQVRUWloSCLq03vTD0ptTdVuEuA8vtweaGhceAMBOyK%40local.


Re: [qubes-users] Extreme CPU usage by dom0 on Qubes 4.0

2018-08-13 Thread Kelly Dean


awokd writes:

> I haven't had any of those problems you listed. Only thing I can think of
> are the basics- have you installed anything in dom0? Did you install a
> release candidate of Qubes, then upgrade? Do you have enough RAM to
> comfortably support the amount of running VMs?

Nothing unusual in dom0. Fresh install of Qubes 4.0 final release, 4 months 
ago. Running 5 sys qubes, and 7 app qubes with some editors, browsers, PDF 
viewers, and xterms. Have 16GB RAM, which is obscenely gargantuan overkill for 
my workload, though with Qubes it's just adequate.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/DWVdBm1dzAfJWQ80BjvW5xIHkL7VaVJSMy8Hx875EZo%40local.


Re: [qubes-users] Incredible HD thrashing on 4.0

2018-08-13 Thread Kelly Dean


Unman writes:

> I don't recognise this on a somewhat under powered laptop with HDD -
> definitely not "minutes at a time". Is there something significant about
> the disks that you cite, or are those just examples?

Nothing significant about #21 in particular. The thrashing procs are whichever 
ones handle the virtual disks for a qube that's thrashing.

System is a core i3 with 16GB RAM, and HD with about 100MB/s throughput.

Worst problems seem to be from swapping, and random times when I start a qube.

The swapping is unpredictable, but here's a typical best-case result for 
starting an ordinary app qube with fedora-28 template:
T+0: start qube. Brief burst of CPU & disk activity for a second, then mostly 
idle for 20 seconds.
T+20: heavy sustained disk thrashing starts.
T+40: pop-up notification that the domain has started. Thrashing continues.
T+60: thrashing abruptly stops.
That's only 1 minute, but when I'm unlucky, it can be several minutes.

How does that compare with your experience?

I don't have anything custom configured to run in the qube at startup, so all 
the activity is from the template's defaults. Nothing special about fedora-28 
either; I get similar results from debian-9 and whonix-ws.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/1nRO277qLRinbOTksjqYuY6FdfuPxLkuajziB9A18Nr%40local.


[qubes-users] Xenstore flakiness

2018-08-10 Thread Kelly Dean
On Qubes 4.0, I get intermittent bugs when using qvm-prefs.

One example:
[user@dom0 ~]$ qvm-prefs sys-whonix netvm
sys-firewall
[user@dom0 ~]$ qvm-prefs sys-whonix netvm ""
usage: qvm-prefs [-h] [--verbose] [--quiet] [--help-properties] [--get]
 [--set] [--default]
 VMNAME [PROPERTY] [VALUE]
qvm-prefs: error: no such property: 'netvm'
[user@dom0 ~]$ qvm-prefs sys-whonix netvm sys-firewall
usage: qvm-prefs [-h] [--verbose] [--quiet] [--help-properties] [--get]
 [--set] [--default]
 VMNAME [PROPERTY] [VALUE]
qvm-prefs: error: no such property: 'netvm'
[user@dom0 ~]$

Rebooting fixed it.


Another example:
[user@dom0 ~]$ qvm-prefs sys-firewall netvm ""

xenstored started taking 100% CPU in dom0, and qubes that connect to 
sys-firewall couldn't start:
[user@dom0 ~]$ qvm-create testvm --label red
[user@dom0 ~]$ qvm-prefs testvm netvm sys-firewall
[user@dom0 ~]$ qvm-start testvm
Start failed: internal error: libxenlight failed to create new domain 'testvm', 
see /var/log/libvirt/libxl/libxl-driver.log for details

Relevant log entries:
2018-04-18 01:40:58.186+: libxl: 
libxl_device.c:1081:device_backend_callback: unable to add device with path 
/local/domain/5/backend/vif/7/0
2018-04-18 01:40:58.186+: libxl: 
libxl_create.c:1512:domcreate_attach_devices: unable to add nic devices
2018-04-18 01:41:08.290+: libxl: 
libxl_device.c:1081:device_backend_callback: unable to remove device with path 
/local/domain/5/backend/vif/7/0
2018-04-18 01:41:08.304+: libxl: libxl.c:1669:devices_destroy_cb: 
libxl__devices_destroy failed for 7

To fix this, I had to kill sys-firewall (couldn't shut it down), and restart it.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/VKB7RNkg3RwUO1WRmQQXivZcMz2WAFkmvSdSSMi1KLX%40local.


[qubes-users] Extreme CPU usage by dom0 on Qubes 4.0

2018-08-10 Thread Kelly Dean
For no apparent reason, dom0 suddenly started consuming practically all CPU 
power, and the system is unusably sluggish. Have a dual-core Core i3, and 
xentop says dom0 is taking anywhere from 112% to 201% CPU.

top in dom0 shows load average ranging from 2 to 3. Occasionally qubesd, 
qvm-pool, or qubes-qube-manager are shown taking around 25% CPU, but usually 
nothing over 10%.

I commonly have disk thrashing on this system, but right now I have practically 
no disk activity, so the problem must be something else. I also paused my 
non-essential qubes, but to no avail. The problem is in dom0.

I've been running 4.0 for 4 months, and have had other problems (spontaneous 
rebooting, which I also had on 3.2, and excessive HD thrashing, new to 4.0), 
but this is the first time I've ever had dom0 make the system completely 
unusable, with no solution other than rebooting.

Note, I'm posting several messages today about other problems too, just because 
I've become aggravated enough. I don't think they have anything to do with each 
other.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/NshbHkD1xG6ok8YHLZieyWeM680Ct9C3A1bH5FRT3dz%40local.


[qubes-users] Incredible HD thrashing on 4.0

2018-08-10 Thread Kelly Dean
Has anybody else used both Qubes 3.2 and 4.0 on a system with a HD, not SSD? 
Have you noticed the disk thrashing to be far worse under 4.0? I suspect it 
might have something to do with the new use of LVM combining snapshots with 
thin provisioning.

The problem seems to be triggered by individual qubes doing ordinary bursts of 
disk access, such as loading a program or accessing swap, which would normally 
take just a few seconds on Qubes 3.2, but dom0 then massively multiplies that 
I/O on Qubes 4.0, leading to disk thrashing that drags on for minutes at a 
time, and in some cases, more than an hour.

iotop in dom0 says the thrashing procs are e.g. [21.xvda-0] and [21.xvda-1], 
reading the disk at rates ranging from 10 to 50 MBps (max throughput of the 
disk is about 100). At this rate, for how prolonged the thrashing is, it could 
have read and re-read the entire virtual disk multiple times over, so there's 
something extremely inefficient going on.
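To put a number on "multiple times over" (illustrative only; the episode length 
and volume size below are my assumptions, the rate is the midpoint of what iotop 
showed):

```python
# Rough scale of the re-reading. Rate is the midpoint of the observed
# 10-50 MB/s; the 30-minute episode and 10 GiB volume size are assumptions.
rate_mb_s = 30
duration_s = 30 * 60
volume_gib = 10

read_gib = rate_mb_s * duration_s / 1024
print(f"~{read_gib:.0f} GiB read, about {read_gib / volume_gib:.1f}x the volume")
# → ~53 GiB read, about 5.3x the volume
```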

Is there any solution other than installing an SSD? I'd prefer not to have to 
add hardware to solve a software performance regression.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/mnKAP70pGMcPXfgVn5L09B0qrQmLBjq1609lLW8POGT%40local.


[qubes-users] Spontaneous rebooting

2018-08-10 Thread Kelly Dean
Am I the only one having a problem with Qubes spontaneously rebooting on Intel 
hardware? Only other reports I see are about AMD problems, but I'm using an 
Intel Core i3.

Happens every few weeks. Sometimes just 1 or 2 weeks, sometimes 5 or 6. Got it 
on Qubes 3.2, and now 4.0 too (new installation, not upgrade), multiple times.

Unlikely to be a hardware problem. The system passed both memtest86 and a 
multi-day mersenne prime stress test. And other OSes tested on this hardware 
before I switched to Qubes, including Debian and Windows, never had a problem.

The rebooting seems completely random. No apparent trigger, and no warning. 
Acts like an instant hard reset. Sometimes even when the system is idle, and I 
haven't touched the console for hours.

It's wearingly inevitable. I don't even bother rebooting after system updates 
anymore, to minimize how many reboots I have to deal with (setting my workspace 
back up is an ordeal), since I know the system will end up spontaneously 
rebooting a week or two later anyway.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/7r15foDK4EKtC8dX4QBmMKrXGWAaC1OpjAxSSEwGaFQ%40local.


Re: [qubes-users] USB VM based on fedora-26 doesn't pass block devices

2018-02-23 Thread Kelly Dean

awokd writes:
> I wonder if this might be related to a recent patch in testing. Are both
> your dom0 and templates on the same repository (current vs. testing) and
> updated? A recent patch also required a reboot once both were updated.

Both on current, and both updated, and rebooted since last update.

Anyway, problem solved. I plugged the USB device into a different port, and it 
worked (I got xvdi in the appVM). Then I detached and moved it back to the port 
where I was having the problem, and this time it worked there too. Aargh, 
heisenbug.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/qQD4CRAJnbV7rBNHPdT5nuwU6Jd23TGISYpAaNcwfsH%40local.


[qubes-users] USB VM based on fedora-26 doesn't pass block devices

2018-02-20 Thread Kelly Dean
I'm getting the same bug as reported at 
https://github.com/QubesOS/qubes-issues/issues/2018

The bug was originally reported against r3.1 and fedora-23-minimal, and fixed 
in r3.2 and fedora-24-minimal. However, I'm getting it on r3.2, using fedora-26 
(full, not minimal) for both my USB VM and my appVM. Both Qubes and the 
template are fully updated. Rebooting the system doesn't fix it. Trying a 
different appVM also doesn't fix it.

qvm-block -l says the USB device is attached to the appVM, but the appVM has no 
/dev/xvdi.

xl block-list shows the appVM's four standard block devices in state 4, plus a 
fifth device in state 3. This is the same result the OP got. I don't know what 
the state numbers 3 and 4 mean; googling turns up nothing, and the xl man page 
doesn't document them either.
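For reference, the state column looks like the XenbusState enum from Xen's 
public header xen/include/public/io/xenbus.h (my reading of it; double-check 
against your Xen version). That would make 4 "Connected" (the normal operating 
state) and 3 "Initialised", i.e. the device was set up but never finished 
connecting:

```python
# XenbusState values as I read them from xen/include/public/io/xenbus.h;
# verify against the header shipped with your Xen version.
XENBUS_STATES = {
    0: "Unknown",
    1: "Initialising",
    2: "InitWait",       # backend waiting for frontend details
    3: "Initialised",    # set up, waiting for the other end to connect
    4: "Connected",      # normal operating state
    5: "Closing",
    6: "Closed",
}

print(f"state 3 = {XENBUS_STATES[3]}, state 4 = {XENBUS_STATES[4]}")
```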

/var/log/xen/xen-hotplug.log reveals nothing helpful.

The OP solved his problem by installing perl in the template. But I'm using the 
full fedora-26, which already has perl installed.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/sMWqGGwq1a5RxCy9341tNKWWshqEMIsHd3trjW9Ys1O%40local.


[qubes-users] Keyboard AltGr bug

2017-07-31 Thread Kelly Dean
In Qubes 3.2 with i3, in any qube, regardless of which template I use, and 
regardless of which physical keyboard I use, the right Alt key is useless when 
mapped as AltGr, because it generates spurious Alt_L events.

With a standard American keyboard (or Polish one; they're equivalent except for 
the markings) and American localization, start a new qube, run xev, press and 
release right Alt, and notice you get KeyPress and KeyRelease events for 
keycode 108 (keysym Alt_R).

Now, do this:
xmodmap -e 'remove mod1 = Alt_R'
xmodmap -e 'keycode 108 = Mode_switch'

Then run xev again, and notice when you press right Alt, you get a KeyPress 
event for keycode 108 (keysym Mode_switch), as expected. However, when you 
release right Alt, you get two events: a spurious KeyPress event for keycode 64 
(keysym Alt_L), followed by a KeyRelease event for keycode 108 (keysym 
Mode_switch). You should only get the latter.

Why does that happen?

On another system running plain Debian instead of Qubes, using the same 
keyboards, I don't get the spurious KeyPress for keycode 64.

To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/IxfIgxhVFiQfrypF6duH5I5VqsylWsAYC0IKwUkDLwY%40local.