[qubes-users] USB stick issue

2021-01-06 Thread Ulrich Windl

Hi!

Maybe it's related to recent updates, or my computer is starting to die. 
Anyway: today I plugged in my USB stick and attached it successfully 
to "vault", and I opened a file from it. Then, suddenly, within one second, 
I saw the stick being disconnected and reconnected, and "vault" 
failed to write the file.


Questions:
1) Is that disconnect expected?
2) Is it expected that a disconnect/reconnect uses a different disk (xvdj 
vs. xvdi)?
3) Is it expected that the partitions appear twice in the file manager 
(see attachment)?
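
(In case it matters for diagnosis: I assume the current state can be 
inspected with something like the following; VM names are from my setup 
and may differ:

    qvm-block list             # dom0: block devices and which qube they are attached to
    qvm-run -p vault lsblk     # inside vault: which xvd* nodes actually exist
)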


Regards,
Ulrich



Re: [EXT] Re: [qubes-users] wireless " intruder "

2021-01-06 Thread Ulrich Windl

On 1/3/21 2:24 PM, haaber wrote:
...

Maybe nmap causes the mirage death. That wouldn't be a good job by
mirage though and should be reported as a bug to the developers.

I thought that, too. How would I verify it is really nmap? As a test, I
scanned two phones in my wifi (in the same dispVM), without any trouble,
using the same command. I re-scanned the offending object; 181 seconds
later mirage was dead again. Fascinating.


Are there logs (the famous "last words")?
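
(I would guess the guest console log in dom0 would hold them, e.g. the 
following; the VM name is just my guess for your setup:

    sudo cat /var/log/xen/console/guest-mirage-firewall.log   # console output of the guest
    sudo xl dmesg | tail -n 50                                 # recent hypervisor messages
)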

...



Re: [EXT] Re: [qubes-users] Disable lock screen / screenshot question

2021-01-06 Thread Ulrich Windl

On 1/2/21 9:31 PM, Andrew David Wong wrote:

On 1/2/21 9:05 AM, Ulrich Windl wrote:

On 12/30/20 8:20 AM, Andrew David Wong wrote:

On 12/29/20 10:02 AM, Ulrich Windl wrote:

[...]
When I tried, it turned out my dom0 does not have a file manager in the 
menu. I had to run "thunar" manually from the terminal.


This is by design. Using a file manager in dom0 is a security risk 
and is therefore discouraged:


https://github.com/Qubes-Community/Contents/blob/master/docs/security/security-guidelines.md#dom0-precautions 



So is there an alternative that gets the user script registered for 
saving a screenshot?




I'm not sure exactly what you mean, but there's:


I mean: it seems you need the file manager to open the file just to 
register it as a handler; is there an alternative that does not use the 
file manager?
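
(One idea I have not tried: registering the handler directly via the 
freedesktop MIME tools, without any file manager. The desktop-file name 
and the script path below are made up:

    cat > ~/.local/share/applications/screenshot-save.desktop <<'EOF'
    [Desktop Entry]
    Type=Application
    Name=Save screenshot
    Exec=/home/user/bin/save-screenshot.sh %f
    MimeType=image/png;
    NoDisplay=true
    EOF
    xdg-mime default screenshot-save.desktop image/png
)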




https://github.com/QubesOS/qubes-issues/issues/953


Unfortunately the issue is quite long, and you are not referring to a 
specific comment...








Re: [qubes-users] How do you think about the clipboard inter-VMs

2021-01-06 Thread Steve Coleman
On Tue, Jan 5, 2021, 11:04 PM pillule  wrote:

>
> Hello,
>
> I wonder how you manage your computing life with the problem of
> the clipboard / file sharing.
>
>
>
> I guess most of us cheat these rules sometimes;
> if one deploys post-installation scripts in dom0,
> or takes notes in a vault and wants to copy in that URL,
> or maybe wants to take that snippet into that template ...
>
> I am curious to know how you think about it.
>

My take on it is to weigh the risk. For instance, I have a 'Purchasing' VM
and an Internet VM. I'll do all my searching for what I want to buy in the
Internet VM and then copy the specific URL over to the Purchasing VM,
rather than using the Purchasing VM to peruse the internet. I feel there is
a much higher likelihood of picking up malware by visiting random internet
sites than by copying and pasting a single URL from a site whose address I
have already inspected. I'll do the same kind of checks when moving receipts
and data from Purchasing to my Banking VM.

For the really paranoid, you can open a text editor in a disposable VM,
paste the URL/text data there for inspection, and only then copy it to the
real destination VM.
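
For example (assuming a disposable VM template named default-dvm; any
editor installed in the template will do):

    qvm-run --dispvm=default-dvm gedit   # throwaway editor for inspecting the pasted text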

If the theoretical copy-buffer attack is against Qubes itself, I may still
be screwed, but that would have to be done by an adversary that both knows
what site I will be visiting and also knows in advance that I use Qubes. We
are talking about a nation-state adversary, who clearly already knows an
awful lot about me. At that level of the game it's only a matter of time,
since clearly I am already a defined target of theirs. Pulling the plug
would be the only effective defence at that point.

So, weigh the risks and take precautions where possible. Always try to
double-check what you are copying/moving across VMs and be appropriately
paranoid when moving data to a higher security domain.

>



Re: [qubes-users] Re: High dom0 CPU usage by qubesd

2021-01-06 Thread Jarrah
This is some really nice tracing work. I'm sure it would be appreciated
as an issue in the qubes-issues repository so it can be tracked properly.

While I haven't gone to the same depth, I can confirm that `qubesd`
jumps to ~25% CPU regularly on my (albeit much beefier) system with i3.
This does correlate with qubes-i3status running on my system as well.


As a temporary workaround, you could modify the script
(/usr/bin/qubes-i3status:123) to run every minute or longer. This would
have the downside of the clock updating more slowly, but otherwise should
not be a problem.
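
(I haven't checked the exact content of that line, so this is only a
sketch of the idea, assuming the script's main loop ends in a sleep; the
function name is a placeholder:

    # /usr/bin/qubes-i3status, end of the main loop (real code differs)
    while true; do
        update_status      # hypothetical name for the part that calls qvm-ls
        sleep 60           # was a much shorter interval; the clock will lag up to a minute
    done
)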

Alternatively, if the number of running VMs doesn't interest you, you
could comment out line 113 and modify line 122 to suit.



[qubes-users] Re: High dom0 CPU usage by qubesd

2021-01-06 Thread Vít Šesták
Hi,
I have some further info. I partially know the cause and have a workaround.

Here is my investigation. Some minor inaccuracies might be caused by 
writing it up retrospectively:

1. I tried to debug using strace. (Prerequisite: sudo 
qubes-dom0-install strace.) After finding the PID of qubesd, I ran:
sudo strace -s 256 -p PID_OF_QUBESD -o /tmp/qubesd.log

It looks like a few seconds are enough to get a reasonable sample; see below.
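
(By the way, the PID of qubesd can be taken from its systemd unit, which 
should work in dom0:

    systemctl show -p MainPID qubesd     # prints MainPID=<pid of qubesd>
)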

2. I ran sort /tmp/qubesd.log | uniq -c | sort -n (one can also add “ -r | 
head -n 50”).

I have noticed an interesting line that repeats frequently:
sendto(270, "QubesNoSuchPropertyError\0", 25, 0, NULL, 0) = 25

3. Look closer:
$ grep --before=5 --after=5 QubesNoSuchPropertyError /tmp/qubesd.log 
The output contains many repeated occurrences of this, just with a 
different VM name. It seems to iterate over all the VMs (even those that 
are not running):
--
epoll_wait(3, [], , 0)= 0
getpid()= 
epoll_wait(3, [], , 0)= 0
epoll_wait(3, [], , 0)= 0
sendto(, "2\0", 2, 0, NULL, 0)   = 2
sendto(, "QubesNoSuchPropertyError\0", 25, 0, NULL, 0) = 25
sendto(, "\0", 1, 0, NULL, 0)= 1
sendto(, "Invalid property 'internal' of \0", 
38, 0, NULL, 0) = 38
shutdown(, SHUT_WR)  = 0
epoll_wait(3, [{EPOLLIN, {u32=, u64=}}], 18, 0) = 
1
close()  = 0

4. WTF, what would iterate over all the VMs? Maybe some script repeatedly 
runs qvm-ls? Let's ps aux | grep qvm-ls that! During increased CPU 
workload, I have identified:

qvm-ls --no-spinner --raw-data --fields NAME,FLAGS

5. With the CPU load coming in random bursts, I cannot reliably verify that 
this is the cause of the increased CPU usage, but at least I can verify 
whether it is the cause of the error messages. So, I tried the command while 
running this:

(sudo strace -s 256 -p PID_OF_QUBESD 2>&1) | grep 'Invalid property'

And yes, it seems to be the cause of the error messages and maybe also the 
source of increased CPU load.

6. Let's identify the script that runs the command: I ran htop, switched to 
tree mode (key t), waited for the qvm-ls (using watch + ps aux + grep) and 
typed “/qvm-ls”.
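
(An equivalent without htop, once a qvm-ls PID is caught; the PIDs below 
are placeholders:

    ps -o ppid= -p <qvm-ls PID>      # print the parent PID
    ps -o args= -p <parent PID>      # show the parent's full command line
)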

And the script to blame is – qubes-i3status

7. And yes, killing qubes-i3status helped to decrease the CPU load. 
After doing that, I was able to confirm that qvm-ls --no-spinner --raw-data 
--fields NAME,FLAGS by itself also causes the CPU load.


So, there are multiple causes combined:

* I have many VMs on my computer.
* I use i3 with qubes-i3status.
* The script qubes-i3status calls command qvm-ls --no-spinner --raw-data 
--fields NAME,FLAGS quite frequently.
* The command qvm-ls --no-spinner --raw-data --fields NAME,FLAGS seems to 
cause high CPU load. Unfortunately, the process that shows the high CPU 
usage is qubesd, not qvm-ls.

What can be improved:

a. Don't use qubes-i3status. Problem solved.
b. Optimize qvm-ls. Not sure how hard it is.
c. Optimize qubes-i3status. I am not sure about the ideal way of doing 
that, but clearly running qvm-ls --no-spinner --raw-data --fields 
NAME,FLAGS just to compute the number of running qubes is far from optimal. 
One could add --running, and maybe it could be done without flags at all. 
The script just ignores VMs with the first flag being “0” (maybe in 
order to ignore dom0) and the second flag being “r” (probably not needed 
with --running). A sketch is below.
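
Just as an illustration (untested, and I am not sure the options behave 
identically on every Qubes version), counting running qubes could be as 
simple as:

    qvm-ls --running --raw-list | grep -cv '^dom0$'   # running qubes, excluding dom0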

Regards,
Vít Šesták 'v6ak'
On Monday, January 4, 2021 at 11:51:23 PM UTC+1 Vít Šesták wrote:

> Hello,
> I have a dual-core i7-7500U with hyperthreading disabled. In dom0, I often 
> see total CPU usage in the tens of percent (often about 50 %, i.e., about 
> one fully utilized core). When I look at htop in dom0, this is clearly 
> caused by qubesd, which uses the vast majority of CPU during these 
> peaks. Note that these peaks look rather random; I see no relation to any 
> activity. But they are quite frequent.
>
> When looking at the process tree, it has many child processes, probably 
> one for each domU qube, but they use near-zero CPU.
>
> The column TIME+ confirms my CPU% observation over the long term.
>
> I am not sure where to find any relevant log. Maybe journalctl, but I have 
> seen nothing suspicious there.
>
> Do you have any idea about the cause, a solution, or even a suggestion for 
> debugging?
>
> Regards,
> Vít Šesták 'v6ak'
>
