[qubes-users] Re: ok, things are happening...

2019-02-07 Thread Marcus Linsner
On Thursday, February 7, 2019 at 3:01:50 PM UTC, Marcus Linsner wrote:
> First: I restarted, then couldn't launch any AppVMs due to:
> 
> [  638.747910] systemd[1]: Starting Qubes memory management daemon...
> [  638.923606] qmemmand[3984]: Traceback (most recent call last):
> [  638.923980] qmemmand[3984]:   File "/usr/bin/qmemmand", line 5, in <module>
> [  638.924219] qmemmand[3984]: sys.exit(main())
> [  638.924465] qmemmand[3984]:   File 
> "/usr/lib/python3.5/site-packages/qubes/tools/qmemmand.py", line 261, in main
> [  638.924697] qmemmand[3984]: 
> qubes.utils.parse_size(config.get('global', 'vm-min-mem'))
> [  638.924908] qmemmand[3984]:   File 
> "/usr/lib/python3.5/site-packages/qubes/utils.py", line 107, in parse_size
> [  638.925120] qmemmand[3984]: raise qubes.exc.QubesException("Invalid 
> size: {0}.".format(size))
> [  638.925331] qmemmand[3984]: qubes.exc.QubesException: Invalid size: 51MIB.
> [  638.941899] systemd[1]: qubes-qmemman.service: Main process exited, 
> code=exited, status=1/FAILURE
> [  638.942249] systemd[1]: Failed to start Qubes memory management daemon.
> [  638.942421] systemd[1]: qubes-qmemman.service: Unit entered failed state.
> [  638.942431] systemd[1]: qubes-qmemman.service: Failed with result 
> 'exit-code'.
> 
> 
> I don't remember updating /usr/lib/python3.5/site-packages/qubes/utils.py,
> but note size = size.strip().upper() in this context:
> 
> def parse_size(size):
>     units = [
>         ('K', 1000), ('KB', 1000),
>         ('M', 1000 * 1000), ('MB', 1000 * 1000),
>         ('G', 1000 * 1000 * 1000), ('GB', 1000 * 1000 * 1000),
>         ('Ki', 1024), ('KiB', 1024),
>         ('Mi', 1024 * 1024), ('MiB', 1024 * 1024),
>         ('Gi', 1024 * 1024 * 1024), ('GiB', 1024 * 1024 * 1024),
>     ]
> 
>     size = size.strip().upper()
>     if size.isdigit():
>         return int(size)
> 
>     for unit, multiplier in units:
>         if size.endswith(unit):
>             size = size[:-len(unit)].strip()
>             return int(size) * multiplier
> 
>     raise qubes.exc.QubesException("Invalid size: {0}.".format(size))
> 
> I had to modify 'MiB' into 'MIB' in the units list up there: parse_size
> uppercases its input, so the mixed-case 'MiB' entry can never match.
> 
> $ cat /etc/qubes/qmemman.conf
> # The only section in this file
> [global]
> # vm-min-mem - give at least this amount of RAM for dynamically managed VM
> #  Default: 200M
> vm-min-mem = 51MiB
> 
> # dom0-mem-boost - additional memory given to dom0 for disk caches etc
> #  Default: 350M
> dom0-mem-boost = 95MiB
> 
> # cache-margin-factor - calculate VM preferred memory as (used 
> memory)*cache-margin-factor
> #  Default: 1.3
> cache-margin-factor = 1.3
> 
On AppVM shutdown, dmesg:
[ 1306.534953] qubesd[2075]: unhandled exception while calling src=b'dom0' 
meth=b'admin.vm.property.Get' dest=b'gmail-basedon-w-s-f-fdr28' 
arg=b'start_time' len(untrusted_payload)=0
[ 1306.535337] qubesd[2075]: Traceback (most recent call last):
[ 1306.535568] qubesd[2075]:   File 
"/usr/lib/python3.5/site-packages/qubes/__init__.py", line 225, in __get__
[ 1306.535797] qubesd[2075]: return getattr(instance, self._attr_name)
[ 1306.536014] qubesd[2075]: AttributeError: 'AppVM' object has no attribute 
'_qubesprop_start_time'
[ 1306.536246] qubesd[2075]: During handling of the above exception, another 
exception occurred:
[ 1306.536463] qubesd[2075]: Traceback (most recent call last):
[ 1306.536678] qubesd[2075]:   File 
"/usr/lib/python3.5/site-packages/qubes/api/__init__.py", line 266, in respond
[ 1306.536898] qubesd[2075]: untrusted_payload=untrusted_payload)
[ 1306.537116] qubesd[2075]:   File "/usr/lib64/python3.5/asyncio/futures.py", 
line 381, in __iter__
[ 1306.537349] qubesd[2075]: yield self  # This tells Task to wait for 
completion.
[ 1306.537568] qubesd[2075]:   File "/usr/lib64/python3.5/asyncio/tasks.py", 
line 310, in _wakeup
[ 1306.537782] qubesd[2075]: future.result()
[ 1306.537993] qubesd[2075]:   File "/usr/lib64/python3.5/asyncio/futures.py", 
line 294, in result
[ 1306.538211] qubesd[2075]: raise self._exception
[ 1306.538467] qubesd[2075]:   File "/usr/lib64/python3.5/asyncio/tasks.py", 
line 240, in _step
[ 1306.538682] qubesd[2075]: result = coro.send(None)
[ 1306.538894] qubesd[2075]:   File 
"/usr/lib64/python3.5/asyncio/coroutines.py", line 210, in coro
[ 1306.539119] qubesd[2075]: res = func(*args, **kw)
[ 1306.539343] qubesd[2075]:   File 
"/usr/lib/python3.5/site-packages/qubes/api/admin.py", line 155, in 
vm_property_get
[ 1306.539564] qubesd[2075]: return self._property_get(self.dest)
[ 

[qubes-users] ok, things are happening...

2019-02-07 Thread Marcus Linsner
First: I restarted, then couldn't launch any AppVMs due to:

[  638.747910] systemd[1]: Starting Qubes memory management daemon...
[  638.923606] qmemmand[3984]: Traceback (most recent call last):
[  638.923980] qmemmand[3984]:   File "/usr/bin/qmemmand", line 5, in <module>
[  638.924219] qmemmand[3984]: sys.exit(main())
[  638.924465] qmemmand[3984]:   File 
"/usr/lib/python3.5/site-packages/qubes/tools/qmemmand.py", line 261, in main
[  638.924697] qmemmand[3984]: qubes.utils.parse_size(config.get('global', 
'vm-min-mem'))
[  638.924908] qmemmand[3984]:   File 
"/usr/lib/python3.5/site-packages/qubes/utils.py", line 107, in parse_size
[  638.925120] qmemmand[3984]: raise qubes.exc.QubesException("Invalid 
size: {0}.".format(size))
[  638.925331] qmemmand[3984]: qubes.exc.QubesException: Invalid size: 51MIB.
[  638.941899] systemd[1]: qubes-qmemman.service: Main process exited, 
code=exited, status=1/FAILURE
[  638.942249] systemd[1]: Failed to start Qubes memory management daemon.
[  638.942421] systemd[1]: qubes-qmemman.service: Unit entered failed state.
[  638.942431] systemd[1]: qubes-qmemman.service: Failed with result 
'exit-code'.


I don't remember updating /usr/lib/python3.5/site-packages/qubes/utils.py,
but note size = size.strip().upper() in this context:

def parse_size(size):
    units = [
        ('K', 1000), ('KB', 1000),
        ('M', 1000 * 1000), ('MB', 1000 * 1000),
        ('G', 1000 * 1000 * 1000), ('GB', 1000 * 1000 * 1000),
        ('Ki', 1024), ('KiB', 1024),
        ('Mi', 1024 * 1024), ('MiB', 1024 * 1024),
        ('Gi', 1024 * 1024 * 1024), ('GiB', 1024 * 1024 * 1024),
    ]

    size = size.strip().upper()
    if size.isdigit():
        return int(size)

    for unit, multiplier in units:
        if size.endswith(unit):
            size = size[:-len(unit)].strip()
            return int(size) * multiplier

    raise qubes.exc.QubesException("Invalid size: {0}.".format(size))

I had to modify 'MiB' into 'MIB' in the units list up there: parse_size
uppercases its input, so the mixed-case 'MiB' entry can never match.
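
A less invasive fix would be to uppercase the unit suffixes too, so the
comparison is effectively case-insensitive. A minimal sketch, not the upstream
patch (it raises ValueError instead of qubes.exc.QubesException so it runs
standalone):

def parse_size_fixed(size):
    units = [
        ('K', 1000), ('KB', 1000),
        ('M', 1000 * 1000), ('MB', 1000 * 1000),
        ('G', 1000 * 1000 * 1000), ('GB', 1000 * 1000 * 1000),
        ('KI', 1024), ('KIB', 1024),
        ('MI', 1024 * 1024), ('MIB', 1024 * 1024),
        ('GI', 1024 * 1024 * 1024), ('GIB', 1024 * 1024 * 1024),
    ]
    size = size.strip().upper()
    if size.isdigit():
        return int(size)
    for unit, multiplier in units:
        if size.endswith(unit):
            # the table is already uppercased, so a 'MiB' input matches 'MIB'
            return int(size[:-len(unit)].strip()) * multiplier
    raise ValueError("Invalid size: {0}.".format(size))

assert parse_size_fixed('51MiB') == 51 * 1024 * 1024
assert parse_size_fixed('200M') == 200 * 1000 * 1000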

$ cat /etc/qubes/qmemman.conf
# The only section in this file
[global]
# vm-min-mem - give at least this amount of RAM for dynamically managed VM
#  Default: 200M
vm-min-mem = 51MiB

# dom0-mem-boost - additional memory given to dom0 for disk caches etc
#  Default: 350M
dom0-mem-boost = 95MiB

# cache-margin-factor - calculate VM preferred memory as (used 
memory)*cache-margin-factor
#  Default: 1.3
cache-margin-factor = 1.3


Then second:
can't start my git AppVM due to:

[  754.926303] libvirtd[1800]: 2019-02-07 14:54:09.985+: 1831: error : 
libxlDomainStart:1308 : internal error: libxenlight failed to create new domain 
'gitsites-baseon-w-s-f-fdr28'
[  754.927495] qubesd[2075]: Start failed: internal error: libxenlight failed 
to create new domain 'gitsites-baseon-w-s-f-fdr28'

which, in fairness, might be due to my newly compiled kernel 4.20.7...
but hey, this gmail AppVM works, so I don't even know anymore :D

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/267e04ff-1eb4-4ba6-905a-fc88b644e46a%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: How to set some DNS entries in /etc/hosts for some hosts?

2019-02-07 Thread Marcus Linsner
On Thursday, February 7, 2019 at 2:17:19 PM UTC, Marcus Linsner wrote:
> On Thursday, February 7, 2019 at 1:44:00 PM UTC, Marcus Linsner wrote:
> > On Thursday, February 7, 2019 at 1:04:07 PM UTC, Marcus Linsner wrote:
> > > On Thursday, February 7, 2019 at 12:57:39 PM UTC, Marcus Linsner wrote:
> > > > Sometimes github.com resolves to 192.30.253.112 and .113 and today(at 
> > > > least) they don't allow port 22 ssh, so `git push` fails like
> > > > ssh: connect to host github.com port 22: No route to host
> > > > 
> > > > I noticed however that when it resolves to something like 140.82.112.40 
> > > > (unsure exactly the IP) then ssh works and `git push` succeeds!
> > > 
> > > the working IP is 140.82.118.3
> > 
> > great, now not even that IP works anymore:
> > ssh: connect to host github.com port 22: No route to host
> > 
> > I'm guessing some epic sshd bug is being exploited? :D silly speculation(s)
> 
> ok, it's because of Qubes: having a Firewall rule like "github.com" "ssh"
> "tcp" apparently adds an iptables(?) rule based on the IP resolved at the
> time (of AppVM start?), and GitHub has changing IPs ("We do not recommend
> whitelisting by IP address," from:
> https://help.github.com/articles/about-github-s-ip-addresses/ )
> 
> so basically, it was my fault :)


Oh, and I forgot to mention: because ping always works even if everything else
is denied (in the AppVM's Firewall tab), it threw me off :) It's a Qubes
feature, I know.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/deddad62-b99d-45eb-9df2-317f4cf35bdc%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: How to set some DNS entries in /etc/hosts for some hosts?

2019-02-07 Thread Marcus Linsner
On Thursday, February 7, 2019 at 1:44:00 PM UTC, Marcus Linsner wrote:
> On Thursday, February 7, 2019 at 1:04:07 PM UTC, Marcus Linsner wrote:
> > On Thursday, February 7, 2019 at 12:57:39 PM UTC, Marcus Linsner wrote:
> > > Sometimes github.com resolves to 192.30.253.112 and .113 and today(at 
> > > least) they don't allow port 22 ssh, so `git push` fails like
> > > ssh: connect to host github.com port 22: No route to host
> > > 
> > > I noticed however that when it resolves to something like 140.82.112.40 
> > > (unsure exactly the IP) then ssh works and `git push` succeeds!
> > 
> > the working IP is 140.82.118.3
> 
> great, now not even that IP works anymore:
> ssh: connect to host github.com port 22: No route to host
> 
> I'm guessing some epic sshd bug is being exploited? :D silly speculation(s)

ok, it's because of Qubes: having a Firewall rule like "github.com" "ssh"
"tcp" apparently adds an iptables(?) rule based on the IP resolved at the time
(of AppVM start?), and GitHub has changing IPs ("We do not recommend
whitelisting by IP address," from:
https://help.github.com/articles/about-github-s-ip-addresses/ )

so basically, it was my fault :)
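
A minimal sketch of the failure mode (just an illustration, not how Qubes
actually implements the rule): the hostname is reduced to whatever A records
one lookup returns, and a later lookup may return a different set:

import socket
import time

def a_records(host):
    # one snapshot of the IPv4 addresses the name currently resolves to
    return sorted({info[4][0]
                   for info in socket.getaddrinfo(host, None, socket.AF_INET)})

snapshot = a_records('github.com')   # what a rule applied now would pin
time.sleep(60)                       # ...while DNS keeps rotating its answers
print(snapshot, a_records('github.com'))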

But still, I'd like an answer to my OP question; I'm going to guess I'll have
to use dnsmasq instead of any kind of /etc/hosts, that is, for global effect.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/047dce15-fbf4-4cb7-88f6-b65d06c94506%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: How to set some DNS entries in /etc/hosts for some hosts?

2019-02-07 Thread Marcus Linsner
On Thursday, February 7, 2019 at 1:04:07 PM UTC, Marcus Linsner wrote:
> On Thursday, February 7, 2019 at 12:57:39 PM UTC, Marcus Linsner wrote:
> > Sometimes github.com resolves to 192.30.253.112 and .113 and today(at 
> > least) they don't allow port 22 ssh, so `git push` fails like
> > ssh: connect to host github.com port 22: No route to host
> > 
> > I noticed however that when it resolves to something like 140.82.112.40 
> > (unsure exactly the IP) then ssh works and `git push` succeeds!
> 
> the working IP is 140.82.118.3

great, now not even that IP works anymore:
ssh: connect to host github.com port 22: No route to host

I'm guessing some epic sshd bug is being exploited? :D silly speculation(s)

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/b76d8eaa-1aa2-4c18-af8c-65e74960386c%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: How to set some DNS entries in /etc/hosts for some hosts?

2019-02-07 Thread Marcus Linsner
On Thursday, February 7, 2019 at 12:57:39 PM UTC, Marcus Linsner wrote:
> Sometimes github.com resolves to 192.30.253.112 and .113 and today(at least) 
> they don't allow port 22 ssh, so `git push` fails like
> ssh: connect to host github.com port 22: No route to host
> 
> I noticed however that when it resolves to something like 140.82.112.40 
> (unsure exactly the IP) then ssh works and `git push` succeeds!

the working IP is 140.82.118.3

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/12ba1cc4-24ca-4145-9323-ae28a436c5dd%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] How to set some DNS entries in /etc/hosts for some hosts?

2019-02-07 Thread Marcus Linsner
Sometimes github.com resolves to 192.30.253.112 and .113, and today (at least)
they don't allow port 22 ssh, so `git push` fails like:
ssh: connect to host github.com port 22: No route to host

I noticed, however, that when it resolves to something like 140.82.112.40
(unsure of the exact IP) then ssh works and `git push` succeeds!
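
A quick way to see which addresses currently accept ssh is to probe port 22 on
every A record. A minimal sketch (the timeout and output format are my own
choices):

import socket

def probe_ssh(host='github.com', port=22, timeout=3):
    addrs = sorted({info[4][0]
                    for info in socket.getaddrinfo(host, port, socket.AF_INET)})
    for addr in addrs:
        try:
            # a plain TCP connect is enough to tell open from unreachable
            with socket.create_connection((addr, port), timeout=timeout):
                print(addr, 'port 22 open')
        except OSError as exc:
            print(addr, 'port 22 failed:', exc)

probe_ssh()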

Valid github IPs can be seen here: https://api.github.com/meta

https://www.githubstatus.com/ currently reports all systems operational.

So, what am I to do? :)

I would need some global way to make sure github.com resolves to the working
IP, but I'm unsure how to make this work.

Ideally this would go in sys-net's /etc/hosts, but that of course doesn't have
any effect: github.com still resolves to either of those .112 and .113 IPs. It
only works if I put the entry in the current AppVM's /etc/hosts, of course.

How can this be done globally?

(Ideally, I would eventually like to bypass DNS completely and only use
/etc/hosts, kept up to date manually, but that's not for this post/thread.)
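
For reference, the per-AppVM workaround that does work is just a hosts entry
pinning the name, e.g. a line like the following in that AppVM's /etc/hosts
(140.82.118.3 is the address observed working above, and may of course rotate):

140.82.118.3 github.com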

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/877926d2-5668-4ff5-9864-7eba8e97ca18%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: "Sensors plugin" in dom0 generates 2 "audit:" dmesg messages on every temperature refresh

2019-02-06 Thread Marcus Linsner
On Wednesday, February 6, 2019 at 6:49:06 PM UTC, Marcus Linsner wrote:
> On Wednesday, February 6, 2019 at 5:25:23 PM UTC, Marcus Linsner wrote:
> > On Wednesday, February 6, 2019 at 5:59:56 PM UTC+1, Marcus Linsner wrote:
> > > On Wednesday, August 22, 2018 at 11:08:44 PM UTC+2, Marcus Linsner wrote:
> > > > "Sensors plugin" is an xfce4-panel plugin which shows the CPU(and SSD) 
> > > > temperatures in the panel. (eg. RMB on panel, Panel->Add New 
> > > > Items...->Search: ->Sensor plugin)
> > > > 
> > > > Its default refresh is 60 seconds. I've set it to 5. But I want it on 1 
> > > > second, however this means it would generate 2 dmesg audit messages 
> > > > every second AND they are flushed to the disk(judging by the case HDD 
> > > > led flashing).
> > > > 
> > > > [   93.223814] audit: type=1100 audit(1534971421.712:183): pid=3748 
> > > > uid=1000 auid=1000 ses=2 msg='op=PAM:authentication 
> > > > grantors=pam_localuser acct="root" exe="/usr/sbin/userhelper" 
> > > > hostname=? addr=? terminal=? res=success'
> > > > [   93.223828] audit: type=1101 audit(1534971421.712:184): pid=3748 
> > > > uid=1000 auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit 
> > > > acct="root" exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? 
> > > > res=success'
> > > > 
> > > > 
> > > > Is there some way to turn these off? if not all the audit messages.
> > > 
> > > audit=0 in /proc/cmdline did it
> > > that is, for me, 
> > > sudo vim /boot/efi/EFI/qubes/xen.cfg
> > > and add it at the end of lines like:
> > > 
> > > kernel=vmlinuz-4.19.12-3.pvops.qubes.x86_64 
> > > root=/dev/mapper/qubes_dom0-root 
> > > rd.luks.uuid=luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b 
> > > rd.lvm.lv=qubes_dom0/root i915.alpha_support=1 rd.luks.options=discard 
> > > root_trim=yes rd.luks.allow-discards ipv6.disable=1 loglevel=15 
> > > log_buf_len=16M printk.always_kmsg_dump=y printk.time=y printk.devkmsg=on 
> > > mminit_loglevel=0 memory_corruption_check=1 fbcon=scrollback:4096k 
> > > fbcon=font:ProFont6x11 net.ifnames=1 pax_sanitize_slab=full console=tty1 
> > > earlyprintk=vga systemd.log_target=kmsg 
> > > systemd.journald.forward_to_console=1 udev.children-max=1256 
> > > rd.udev.children-max=1256 rhgb sysrq_always_enabled random.trust_cpu=off 
> > > audit=0
> > > 
> > > however now I've:
> > > [11487.420448] userhelper[9870]: running '/usr/sbin/hddtemp -n -q 
> > > /dev/sda' with root privileges on behalf of 'ctor'
> > > as a spam, every second.
> > > 
> > > I've noticed that /usr/sbin/hddtemp wasn't already suid root, so I've set 
> > > it now via:
> > > sudo chmod u+s /usr/sbin/hddtemp
> > > 
> > > the spam still happens, but maybe a reboot is in order, unless 
> > > xfce4-sensors-plugin is calling userhelper itself? 
> > > 
> > > [ctor@dom0 ~]$ rpm -qf `which userhelper`
> > > usermode-1.111-8.fc24.x86_64
> > > 
> > > I'll go reboot, if it works I won't post again, otherwise I'll keep 
> > > trying to find a way to get rid of this spam.
> > 
> > suid+reboot didn't work, but looks like I've encountered this before here: 
> > https://groups.google.com/d/msg/qubes-devel/NfVQi0HXWEY/uiw23yq2CgAJ
> > and it is a loglevel 15 message
> > <15>[   87.005717] userhelper[4027]: running '/usr/sbin/hddtemp -n -q 
> > /dev/sda' with root privileges on behalf of 'ctor'
> > 
> > so, in the worst case all I have to do is find out how to tell 
> > systemd/journald to not store it, which frankly I've no idea how, since it 
> > only accepts 0-7 numbers according to man journald.conf for MaxLevelStore= 
> > and yet that level 15 message still lands in journalctl -b 0
> > but perhaps other forwarding settings are in effect which make it so.
> > 
> > MaxLevelStore=, MaxLevelSyslog=, MaxLevelKMsg=, MaxLevelConsole=, 
> > MaxLevelWall=
> >Controls the maximum log level of messages that are stored on 
> > disk, forwarded to syslog, kmsg, the console or wall (if that is enabled, 
> > see above).
> >As argument, takes one of "emerg", "alert", "crit", "err", 
> > "warning", "notice", "info", "debug", or integer values in the range of 0-7
> >(corresponding to th

[qubes-users] Re: "Sensors plugin" in dom0 generates 2 "audit:" dmesg messages on every temperature refresh

2019-02-06 Thread Marcus Linsner
On Wednesday, February 6, 2019 at 5:25:23 PM UTC, Marcus Linsner wrote:
> On Wednesday, February 6, 2019 at 5:59:56 PM UTC+1, Marcus Linsner wrote:
> > On Wednesday, August 22, 2018 at 11:08:44 PM UTC+2, Marcus Linsner wrote:
> > > "Sensors plugin" is an xfce4-panel plugin which shows the CPU(and SSD) 
> > > temperatures in the panel. (eg. RMB on panel, Panel->Add New 
> > > Items...->Search: ->Sensor plugin)
> > > 
> > > Its default refresh is 60 seconds. I've set it to 5. But I want it on 1 
> > > second, however this means it would generate 2 dmesg audit messages every 
> > > second AND they are flushed to the disk(judging by the case HDD led 
> > > flashing).
> > > 
> > > [   93.223814] audit: type=1100 audit(1534971421.712:183): pid=3748 
> > > uid=1000 auid=1000 ses=2 msg='op=PAM:authentication 
> > > grantors=pam_localuser acct="root" exe="/usr/sbin/userhelper" hostname=? 
> > > addr=? terminal=? res=success'
> > > [   93.223828] audit: type=1101 audit(1534971421.712:184): pid=3748 
> > > uid=1000 auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit 
> > > acct="root" exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? 
> > > res=success'
> > > 
> > > 
> > > Is there some way to turn these off? if not all the audit messages.
> > 
> > audit=0 in /proc/cmdline did it
> > that is, for me, 
> > sudo vim /boot/efi/EFI/qubes/xen.cfg
> > and add it at the end of lines like:
> > 
> > kernel=vmlinuz-4.19.12-3.pvops.qubes.x86_64 
> > root=/dev/mapper/qubes_dom0-root 
> > rd.luks.uuid=luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b 
> > rd.lvm.lv=qubes_dom0/root i915.alpha_support=1 rd.luks.options=discard 
> > root_trim=yes rd.luks.allow-discards ipv6.disable=1 loglevel=15 
> > log_buf_len=16M printk.always_kmsg_dump=y printk.time=y printk.devkmsg=on 
> > mminit_loglevel=0 memory_corruption_check=1 fbcon=scrollback:4096k 
> > fbcon=font:ProFont6x11 net.ifnames=1 pax_sanitize_slab=full console=tty1 
> > earlyprintk=vga systemd.log_target=kmsg 
> > systemd.journald.forward_to_console=1 udev.children-max=1256 
> > rd.udev.children-max=1256 rhgb sysrq_always_enabled random.trust_cpu=off 
> > audit=0
> > 
> > however now I've:
> > [11487.420448] userhelper[9870]: running '/usr/sbin/hddtemp -n -q /dev/sda' 
> > with root privileges on behalf of 'ctor'
> > as a spam, every second.
> > 
> > I've noticed that /usr/sbin/hddtemp wasn't already suid root, so I've set 
> > it now via:
> > sudo chmod u+s /usr/sbin/hddtemp
> > 
> > the spam still happens, but maybe a reboot is in order, unless 
> > xfce4-sensors-plugin is calling userhelper itself? 
> > 
> > [ctor@dom0 ~]$ rpm -qf `which userhelper`
> > usermode-1.111-8.fc24.x86_64
> > 
> > I'll go reboot, if it works I won't post again, otherwise I'll keep trying 
> > to find a way to get rid of this spam.
> 
> suid+reboot didn't work, but looks like I've encountered this before here: 
> https://groups.google.com/d/msg/qubes-devel/NfVQi0HXWEY/uiw23yq2CgAJ
> and it is a loglevel 15 message
> <15>[   87.005717] userhelper[4027]: running '/usr/sbin/hddtemp -n -q 
> /dev/sda' with root privileges on behalf of 'ctor'
> 
> so, in the worst case all I have to do is find out how to tell 
> systemd/journald to not store it, which frankly I've no idea how, since it 
> only accepts 0-7 numbers according to man journald.conf for MaxLevelStore= 
> and yet that level 15 message still lands in journalctl -b 0
> but perhaps other forwarding settings are in effect which make it so.
> 
> MaxLevelStore=, MaxLevelSyslog=, MaxLevelKMsg=, MaxLevelConsole=, 
> MaxLevelWall=
>Controls the maximum log level of messages that are stored on 
> disk, forwarded to syslog, kmsg, the console or wall (if that is enabled, see 
> above).
>As argument, takes one of "emerg", "alert", "crit", "err", 
> "warning", "notice", "info", "debug", or integer values in the range of 0-7
>(corresponding to the same levels). Messages equal or below the 
> log level specified are stored/forwarded, messages above are dropped. 
> Defaults to
>"debug" for MaxLevelStore= and MaxLevelSyslog=, to ensure that the 
> all messages are written to disk and forwarded to syslog. Defaults to "notice"
>for MaxLevelKMsg=, "info" for MaxLevelConsole=, and "emerg" for 
> MaxLevelWall=.
> 
> So, since 'debug' is

[qubes-users] Re: "Sensors plugin" in dom0 generates 2 "audit:" dmesg messages on every temperature refresh

2019-02-06 Thread Marcus Linsner
On Wednesday, February 6, 2019 at 5:59:56 PM UTC+1, Marcus Linsner wrote:
> On Wednesday, August 22, 2018 at 11:08:44 PM UTC+2, Marcus Linsner wrote:
> > "Sensors plugin" is an xfce4-panel plugin which shows the CPU(and SSD) 
> > temperatures in the panel. (eg. RMB on panel, Panel->Add New 
> > Items...->Search: ->Sensor plugin)
> > 
> > Its default refresh is 60 seconds. I've set it to 5. But I want it on 1 
> > second, however this means it would generate 2 dmesg audit messages every 
> > second AND they are flushed to the disk(judging by the case HDD led 
> > flashing).
> > 
> > [   93.223814] audit: type=1100 audit(1534971421.712:183): pid=3748 
> > uid=1000 auid=1000 ses=2 msg='op=PAM:authentication grantors=pam_localuser 
> > acct="root" exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? 
> > res=success'
> > [   93.223828] audit: type=1101 audit(1534971421.712:184): pid=3748 
> > uid=1000 auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit 
> > acct="root" exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? 
> > res=success'
> > 
> > 
> > Is there some way to turn these off? if not all the audit messages.
> 
> audit=0 in /proc/cmdline did it
> that is, for me, 
> sudo vim /boot/efi/EFI/qubes/xen.cfg
> and add it at the end of lines like:
> 
> kernel=vmlinuz-4.19.12-3.pvops.qubes.x86_64 root=/dev/mapper/qubes_dom0-root 
> rd.luks.uuid=luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b 
> rd.lvm.lv=qubes_dom0/root i915.alpha_support=1 rd.luks.options=discard 
> root_trim=yes rd.luks.allow-discards ipv6.disable=1 loglevel=15 
> log_buf_len=16M printk.always_kmsg_dump=y printk.time=y printk.devkmsg=on 
> mminit_loglevel=0 memory_corruption_check=1 fbcon=scrollback:4096k 
> fbcon=font:ProFont6x11 net.ifnames=1 pax_sanitize_slab=full console=tty1 
> earlyprintk=vga systemd.log_target=kmsg systemd.journald.forward_to_console=1 
> udev.children-max=1256 rd.udev.children-max=1256 rhgb sysrq_always_enabled 
> random.trust_cpu=off audit=0
> 
> however now I've:
> [11487.420448] userhelper[9870]: running '/usr/sbin/hddtemp -n -q /dev/sda' 
> with root privileges on behalf of 'ctor'
> as a spam, every second.
> 
> I've noticed that /usr/sbin/hddtemp wasn't already suid root, so I've set it 
> now via:
> sudo chmod u+s /usr/sbin/hddtemp
> 
> the spam still happens, but maybe a reboot is in order, unless 
> xfce4-sensors-plugin is calling userhelper itself? 
> 
> [ctor@dom0 ~]$ rpm -qf `which userhelper`
> usermode-1.111-8.fc24.x86_64
> 
> I'll go reboot, if it works I won't post again, otherwise I'll keep trying to 
> find a way to get rid of this spam.

suid+reboot didn't work, but it looks like I've encountered this before, here:
https://groups.google.com/d/msg/qubes-devel/NfVQi0HXWEY/uiw23yq2CgAJ
and it is a loglevel 15 message:
<15>[   87.005717] userhelper[4027]: running '/usr/sbin/hddtemp -n -q /dev/sda' 
with root privileges on behalf of 'ctor'

So, in the worst case, all I have to do is find out how to tell
systemd/journald not to store it, though frankly I've no idea how: according to
man journald.conf, MaxLevelStore= only accepts the numbers 0-7, and yet that
level 15 message still lands in journalctl -b 0. Perhaps other forwarding
settings are in effect that make it so.

MaxLevelStore=, MaxLevelSyslog=, MaxLevelKMsg=, MaxLevelConsole=, MaxLevelWall=
    Controls the maximum log level of messages that are stored on disk,
    forwarded to syslog, kmsg, the console or wall (if that is enabled, see
    above). As argument, takes one of "emerg", "alert", "crit", "err",
    "warning", "notice", "info", "debug", or integer values in the range of
    0-7 (corresponding to the same levels). Messages equal or below the log
    level specified are stored/forwarded, messages above are dropped. Defaults
    to "debug" for MaxLevelStore= and MaxLevelSyslog=, to ensure that all
    messages are written to disk and forwarded to syslog. Defaults to "notice"
    for MaxLevelKMsg=, "info" for MaxLevelConsole=, and "emerg" for
    MaxLevelWall=.

So, since 'debug' is 7, it stands to reason that a level 15 message won't be 
seen, unless ... I'm missing something.
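
The missing piece may be that the <15> prefix is not a journald level at all:
a minimal sketch, assuming it is the standard syslog PRI value (facility * 8 +
severity) that the kernel ring buffer uses, which would decode to facility 1
(user) and severity 7 (debug), i.e. a severity journald does store by default:

def decode_pri(pri):
    # standard syslog encoding: PRI = facility * 8 + severity
    facility, severity = divmod(pri, 8)
    return facility, severity

print(decode_pri(15))   # -> (1, 7): facility 'user', severity 'debug'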

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/0a93d3ae-5811-4fd3-b69c-8bb10f1e9123%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: "Sensors plugin" in dom0 generates 2 "audit:" dmesg messages on every temperature refresh

2019-02-06 Thread Marcus Linsner
On Wednesday, August 22, 2018 at 11:08:44 PM UTC+2, Marcus Linsner wrote:
> "Sensors plugin" is an xfce4-panel plugin which shows the CPU(and SSD) 
> temperatures in the panel. (eg. RMB on panel, Panel->Add New 
> Items...->Search: ->Sensor plugin)
> 
> Its default refresh is 60 seconds. I've set it to 5. But I want it on 1 
> second, however this means it would generate 2 dmesg audit messages every 
> second AND they are flushed to the disk(judging by the case HDD led flashing).
> 
> [   93.223814] audit: type=1100 audit(1534971421.712:183): pid=3748 uid=1000 
> auid=1000 ses=2 msg='op=PAM:authentication grantors=pam_localuser acct="root" 
> exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? res=success'
> [   93.223828] audit: type=1101 audit(1534971421.712:184): pid=3748 uid=1000 
> auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit acct="root" 
> exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? res=success'
> 
> 
> Is there some way to turn these off? if not all the audit messages.

audit=0 in /proc/cmdline did it.
That is, for me:
sudo vim /boot/efi/EFI/qubes/xen.cfg
and add it at the end of the kernel= lines, like:

kernel=vmlinuz-4.19.12-3.pvops.qubes.x86_64 root=/dev/mapper/qubes_dom0-root 
rd.luks.uuid=luks-9ed952b5-2aa8-4564-b700-fb23f5c9e94b 
rd.lvm.lv=qubes_dom0/root i915.alpha_support=1 rd.luks.options=discard 
root_trim=yes rd.luks.allow-discards ipv6.disable=1 loglevel=15 log_buf_len=16M 
printk.always_kmsg_dump=y printk.time=y printk.devkmsg=on mminit_loglevel=0 
memory_corruption_check=1 fbcon=scrollback:4096k fbcon=font:ProFont6x11 
net.ifnames=1 pax_sanitize_slab=full console=tty1 earlyprintk=vga 
systemd.log_target=kmsg systemd.journald.forward_to_console=1 
udev.children-max=1256 rd.udev.children-max=1256 rhgb sysrq_always_enabled 
random.trust_cpu=off audit=0

However, now I get:
[11487.420448] userhelper[9870]: running '/usr/sbin/hddtemp -n -q /dev/sda' 
with root privileges on behalf of 'ctor'
as spam, every second.

I've noticed that /usr/sbin/hddtemp wasn't already suid root, so I've set it 
now via:
sudo chmod u+s /usr/sbin/hddtemp

the spam still happens, but maybe a reboot is in order, unless 
xfce4-sensors-plugin is calling userhelper itself? 

[ctor@dom0 ~]$ rpm -qf `which userhelper`
usermode-1.111-8.fc24.x86_64

I'll go reboot; if it works I won't post again, otherwise I'll keep trying to
find a way to get rid of this spam.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/53d079cf-2994-4e78-935b-e9398e51b174%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [qubes-users] Re: dom0 clock keeps resetting to wrong time again

2019-01-21 Thread Marcus Linsner
On Sunday, January 20, 2019 at 7:02:34 PM UTC+1, john s. wrote:
> On 1/20/19 12:06 AM, Andrew David Wong wrote:
> > On 19/01/2019 5.08 PM, John S.Recdep wrote:
> >> On 1/14/19 9:55 PM, John S.Recdep wrote:
> >>> On 1/14/19 10:10 AM, John S.Recdep wrote:
>  Hello,
> 
>  I believe my Bios time is UTC
> 
>  qubes-prefs shows my clockvm  as sys-net
> 
>  I have been trying to use sudo date --set <"localtime TZ">  to
>  get my dom0 correct.  Which it does but within 30-60  it is
>  changing to another TZ I don't recognize
> 
>  I have also tried  qvm-sync-clock  ,  and tried a qubes group
>  search as I remember fighting this out many times   with whonix
>  issues , however
> 
>  I am at a loss what to do further to problem-solve fix this  , 
>  appreciate your help
> 
> 
>  maybe I can switch the template for sys-net back to fedora-28
>  instead of -29  . ?
> 
> >>>
> >>> 1-14-19
> >>>
> >>> changing the clockvm to fed-28 and rerunning qvm-sync-clock did
> >>> nothing BUT*  changing sys-net to debian-9  and qvm-sync-clock
> >>> fixed it   sigh
> >>>
> >>> case anyone else gets this issue again ; prolly will fix
> >>> whonixcheck complaints if any as well
> >>>
> >>> 'solved'
> >>>
> > 
> >> Must be related to this 
> >> https://github.com/QubesOS/qubes-issues/issues/3983
> > 
> >> for myself it's not the  minimal fedora-29 its the regular one ;
> >> and it appears if fedora-29 is having time issues  that  that may
> >> be why my thunderbird appvm based on it,  is also  timestamping the
> >> messages wrong ?
> > 
> > 
> > Try the workaround mentioned in the comments on that issue, if you
> > haven't already. It's worked for me so far.
> > 
> > 
> 
> 
> well if you mean:
> 
> in the Fedora-29 template doing
> 
> sudo chmod 700 /var/lib/private
> 
> 
> [user@fedora-29 ~]$ sudo ls -l /var/lib/private/
> total 4
This looks inside the private dir, but what you want is to look at the dir
itself; here are two ways:
[user@sys-firewall ~]$ stat /var/lib/private/
  File: /var/lib/private/
  Size: 4096        Blocks: 8          IO Block: 4096   directory
Device: ca03h/51715d    Inode: 397353      Links: 3
Access: (0700/drwx------)  Uid: (    0/    root)   Gid: (    0/    root)
Access: 2018-05-24 01:13:40.232000000 +0200
Modify: 2018-05-24 01:13:40.232000000 +0200
Change: 2018-12-20 03:21:32.447000000 +0100
 Birth: -

[user@sys-firewall ~]$ ls -la /var/lib/|grep private
drwx------  3 root root 4096 May 24  2018 private


> drwxr-xr-x 2 root root 4096 Dec  9 15:37 systemd
> [user@fedora-29 ~]$ sudo chmod 700 /var/lib/private/
Now that you've run this, it should already be fixed :)
> [user@fedora-29 ~]$ sudo ls -l /var/lib/private/
> total 4
> drwxr-xr-x 2 root root 4096 Dec  9 15:37 systemd
> 
> 
> doesn't seem to change any permissions on the systemd directory, so...
> doesn't seem like that is going to fix anything? Maybe I'm missing something
> basic?

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/44e5d8af-9bf4-42c0-adea-cd90869af086%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [qubes-users] Re: Qubes 4.0: Device Widget disappeared

2018-11-02 Thread Marcus Linsner
On Thursday, November 1, 2018 at 11:21:08 AM UTC+1, peter.p...@gmail.com wrote:
> On Monday, October 15, 2018 at 11:17:48 AM UTC+2, fabr...@statebox.io wrote:
> > On Friday, October 12, 2018 at 7:13:56 AM UTC+2, awokd wrote:
> > > fabr...@statebox.io wrote on 10/11/18 9:40 AM:
> > > > On Friday, May 4, 2018 at 5:05:45 PM UTC+1, gaxi...@gmail.com wrote:
> > > >> The system tray device widget is not visible anymore. It was there 
> > > >> after a fresh 4.0 install but disappeared after some days. How can i 
> > > >> get it back?
> > > > 
> > > > Bad news!: After today's Dom0 update, I got exactly this problem. Trace 
> > > > is the same as reported above. I never encountered it before. Is there 
> > > > something I can do?
> > > 
> > > Looks like that's a recent bug that has already been fixed: 
> > > https://github.com/QubesOS/qubes-issues/issues/4386, so the device 
> > > widget should reappear after the next round of updates.
> > 
> > I updated from the qubes-dom0-current-testing repo but the problem is still 
> > there :(
> 
> I observed the following:
> 
> Starting a vm with 
> $ qvm-start --hddisk dom0:dataimage.img vm-name 
> makes the devices widget disappear.
> 
> But luckily
> $ python3 -mqui.tray.devices &
this should also work:
$ qui-devices &
> re-creates it again.
> 
> Starting a vm without an external HDD image leaves the widget untouched.

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/f52183b6-6918-4cb6-89c5-d36f258f1c28%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


[qubes-users] Re: "Error: conflicting requests" while trying to install some kernel rpm package(s)

2018-10-23 Thread Marcus Linsner
On Wednesday, October 24, 2018 at 7:16:07 AM UTC+2, Marcus Linsner wrote:
> well, removing 4.19.0_rc5-5 allows installing 4.19.0-1 (or 4.19.0_rc5-5
> again, but then I can't install 4.19.0-1). No idea why this makes sense,
> though; it must be something I did in kernel.spec, so ignore this thread I
> guess, it must be my bad. (I say this also because I cannot install
> kernel-latest-qubes-vm-4.19.0.pvops.qubes.x86_64.rpm at all, even after
> removing all other kernel-latest-qubes-vm versions, but can install
> kernel-latest-qubes-vm-4.19.0_rc7-1.pvops.qubes.x86_64.rpm)
> 
> [ctor@dom0 ~]$ sudo dnf remove 
> kernel-latest-1000:4.19.0_rc5-5.pvops.qubes.x86_64
> Dependencies resolved.
> ===
>  Package Arch Version 
>Repository 
>   Size
> ===
> Removing:
>  kernel-latest   x86_64   
> 1000:4.19.0_rc5-5.pvops.qubes  @@commandline  
>   64 M
> 
> Transaction Summary
> ===
> Remove  1 Package
> 
> Installed size: 64 M
> Is this ok [y/N]: y
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Erasing : kernel-latest-1000:4.19.0_rc5-5.pvops.qubes.x86_64
>   
>1/1 
>   Verifying   : kernel-latest-1000:4.19.0_rc5-5.pvops.qubes.x86_64
>   
>1/1 
> 
> Removed:
>   kernel-latest.x86_64 1000:4.19.0_rc5-5.pvops.qubes  
>   
>
> 
> Complete!
> 
> [ctor@dom0 ~]$ sudo dnf install kernel-latest-4.19.0-1.pvops.qubes.x86_64.rpm
> Qubes OS Repository for Dom0  
>   26 MB/s |  27 kB 
> 00:00
> Dependencies resolved.
> ===
>  Package   Arch   Version 
>   Repository  
>   Size
> ===
> Installing:
>  kernel-latest x86_64 
> 1000:4.19.0-1.pvops.qubes @commandline
>   14 M
> 
> Transaction Summary
> ===
> Install  1 Package
> 
> Total size: 14 M
> Installed size: 64 M
> Is this ok [y/N]: y
> Downloading Packages:
> Running transaction check
> Transaction check succeeded.
> Running transaction test
> Transaction test succeeded.
> Running transaction
>   Installing  : kernel-latest-1000:4.19.0-1.pvops.qubes.x86_64
>   
>1/1 
>   Verifying   : kernel-latest-1000:4.19.0-1.pvops.qubes.x86_64
>   
>1/1 
> 
> Installed:
>   kernel-latest.x86_64 1000:4.19.0-1.pvops.qubes  
>   
>
> 
> Complete!
> 
> [ctor@dom0 ~]$ dnf list kernel-latest-1000:
> Last metadata expiration check: 26 days, 17:26:13 ago on Thu Sep 27 13:28:26 
> 2018.
> Installed Packages
> kernel-latest.x86_64  
> 1000:4.19.0-1.pvops.qubes 
>   @@commandline
> kernel-latest.x86_64  
> 1000:4.19.0_rc5-4.pvops.qub

[qubes-users] Re: "Error: conflicting requests" while trying to install some kernel rpm package(s)

2018-10-23 Thread Marcus Linsner
 'kernel-latest-qubes-vm'
package kernel-latest-qubes-vm is not installed

[ctor@dom0 ~]$ sudo dnf install 
kernel-latest-qubes-vm-4.19.0_rc7-1.pvops.qubes.x86_64.rpm
Qubes OS Repository for Dom0
26 MB/s |  27 kB 00:00  
  
Dependencies resolved.
===
 PackageArch   Version  
Repository
Size
===
Installing:
 kernel-latest-qubes-vm x86_64 
1000:4.19.0_rc7-1.pvops.qubes@commandline   
   31 M

Transaction Summary
===
Install  1 Package

Total size: 31 M
Installed size: 98 M
Is this ok [y/N]: 


On Wednesday, October 24, 2018 at 5:49:37 AM UTC+2, Marcus Linsner wrote:
> Hi. Some packages in R4.0 dom0 got updated recently and now I fail to be able 
> to install kernel .rpm packages that I was previously able to install, like 
> so:
> 
> [ctor@dom0 ~]$ sudo dnf install --allowerasing -v 
> kernel-latest-4.19.0_rc5-1.pvops.qubes.x86_64.rpm 
> cachedir: /var/cache/dnf
> Loaded plugins: reposync, download, playground, copr, noroot, 
> debuginfo-install, needs-restarting, builddep, Query, 
> generate_completion_cache, protected_packages, config-manager
> DNF version: 1.1.10
> Qubes OS Repository for Dom0  
>  208 MB/s | 233 kB 
> 00:00
> not found deltainfo for: Qubes OS Repository for Dom0
> not found updateinfo for: Qubes OS Repository for Dom0
> qubes-dom0-cached: using metadata from Tue Oct 23 09:46:55 2018.
> Completion plugin: Generating completion cache...
> --> Starting dependency resolution
> --> Finished dependency resolution
> Error: conflicting requests
> 
> -rw-rw-r--   1 ctor ctor 14425450 Sep 26 22:51 
> kernel-latest-4.19.0_rc5-1.pvops.qubes.x86_64.rpm
> 
> I'm pretty sure I already had integrated the latest commits from 
> https://github.com/QubesOS/qubes-linux-kernel/commits/master
> in that package, and as I mentioned, this very package installed just fine 
> before.
> 
> Any ideas what might be going wrong? or hints on where to look?
> 
> 
> 
> here's from /var/log/dnf.log:
> 
> Oct 24 05:38:22 INFO --- logging initialized ---
> Oct 24 05:38:22 DDEBUG timer: config: 4 ms
> Oct 24 05:38:22 DEBUG cachedir: /var/cache/dnf
> Oct 24 05:38:22 DEBUG Loaded plugins: Query, noroot, playground, 
> debuginfo-install, generate_completion_cache, builddep, reposync, 
> needs-restarting, config-manager
> , download, copr, protected_packages
> Oct 24 05:38:22 DEBUG DNF version: 1.1.10
> Oct 24 05:38:22 DDEBUG Command: dnf -q clean all 
> Oct 24 05:38:22 DDEBUG Installroot: /
> Oct 24 05:38:22 DDEBUG Releasever: 4.0
> Oct 24 05:38:22 DDEBUG Base command: clean
> Oct 24 05:38:22 DDEBUG Extra commands: ['all']
> Oct 24 05:38:22 DEBUG Cleaning data: dbcache metadata packages
> Oct 24 05:38:22 DDEBUG Removing file /var/cache/dnf/qubes-dom0-cached.solv
> Oct 24 05:38:22 DDEBUG Removing file 
> /var/cache/dnf/qubes-dom0-cached-filenames.solvx
> Oct 24 05:38:22 DDEBUG Removing file /var/cache/dnf/@System.solv
> Oct 24 05:38:22 DDEBUG Removing file 
> /var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/repomd.xml
> Oct 24 05:38:22 DDEBUG Removing file 
> /var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/169d927ebaee0282378273bd8238824671876deef42df90fb8679aa7e03fd7bd-filelists.xml.gz
> Oct 24 05:38:22 DDEBUG Removing file 
> /var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/848685ae636fd0dcb1cb1dbfa0af56d5494bd7804068cb88f5649d8390deae8c-Qubes-comps.xml.gz
> Oct 24 05:38:22 DDEBUG Removing file 
> /var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/a20893b8b4ca012a594c955549528d8f2fece0e8dba58047963cbf9f28fd15af-primary.xml.gz
> Oct 24 05:38:22 INFO 7 files removed
> Oct 24 05:38:22 DDEBUG Cleaning up.
> Oct 24 05:38:22 INFO --- logging initialized ---
> Oct 24 05:38:22 DDEBUG timer: config: 3 ms
> Oct 24 05:38:22 DEBUG cachedir: /var/cache/dnf
> Oct 24 05:38:22 DEBUG Loaded plugins: builddep, copr, config-manager, noroot, 
> reposync, generate_completion_cache, Query, protected_packages, 
> ne

[qubes-users] "Error: conflicting requests" while trying to install some kernel rpm package(s)

2018-10-23 Thread Marcus Linsner
Hi. Some packages in R4.0 dom0 got updated recently, and now I'm unable to
install kernel .rpm packages that I was previously able to install, like so:

[ctor@dom0 ~]$ sudo dnf install --allowerasing -v 
kernel-latest-4.19.0_rc5-1.pvops.qubes.x86_64.rpm 
cachedir: /var/cache/dnf
Loaded plugins: reposync, download, playground, copr, noroot, 
debuginfo-install, needs-restarting, builddep, Query, 
generate_completion_cache, protected_packages, config-manager
DNF version: 1.1.10
Qubes OS Repository for Dom0
   208 MB/s | 233 kB 00:00  
  
not found deltainfo for: Qubes OS Repository for Dom0
not found updateinfo for: Qubes OS Repository for Dom0
qubes-dom0-cached: using metadata from Tue Oct 23 09:46:55 2018.
Completion plugin: Generating completion cache...
--> Starting dependency resolution
--> Finished dependency resolution
Error: conflicting requests

-rw-rw-r--   1 ctor ctor 14425450 Sep 26 22:51 
kernel-latest-4.19.0_rc5-1.pvops.qubes.x86_64.rpm

I'm pretty sure I already had integrated the latest commits from 
https://github.com/QubesOS/qubes-linux-kernel/commits/master
in that package, and as I mentioned, this very package installed just fine 
before.

Any ideas what might be going wrong? or hints on where to look?



here's from /var/log/dnf.log:

Oct 24 05:38:22 INFO --- logging initialized ---
Oct 24 05:38:22 DDEBUG timer: config: 4 ms
Oct 24 05:38:22 DEBUG cachedir: /var/cache/dnf
Oct 24 05:38:22 DEBUG Loaded plugins: Query, noroot, playground, 
debuginfo-install, generate_completion_cache, builddep, reposync, 
needs-restarting, config-manager
, download, copr, protected_packages
Oct 24 05:38:22 DEBUG DNF version: 1.1.10
Oct 24 05:38:22 DDEBUG Command: dnf -q clean all 
Oct 24 05:38:22 DDEBUG Installroot: /
Oct 24 05:38:22 DDEBUG Releasever: 4.0
Oct 24 05:38:22 DDEBUG Base command: clean
Oct 24 05:38:22 DDEBUG Extra commands: ['all']
Oct 24 05:38:22 DEBUG Cleaning data: dbcache metadata packages
Oct 24 05:38:22 DDEBUG Removing file /var/cache/dnf/qubes-dom0-cached.solv
Oct 24 05:38:22 DDEBUG Removing file 
/var/cache/dnf/qubes-dom0-cached-filenames.solvx
Oct 24 05:38:22 DDEBUG Removing file /var/cache/dnf/@System.solv
Oct 24 05:38:22 DDEBUG Removing file 
/var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/repomd.xml
Oct 24 05:38:22 DDEBUG Removing file 
/var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/169d927ebaee0282378273bd8238824671876deef42df90fb8679aa7e03fd7bd-filelists.xml.gz
Oct 24 05:38:22 DDEBUG Removing file 
/var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/848685ae636fd0dcb1cb1dbfa0af56d5494bd7804068cb88f5649d8390deae8c-Qubes-comps.xml.gz
Oct 24 05:38:22 DDEBUG Removing file 
/var/cache/dnf/qubes-dom0-cached-0a4969fcb2af0ab4/repodata/a20893b8b4ca012a594c955549528d8f2fece0e8dba58047963cbf9f28fd15af-primary.xml.gz
Oct 24 05:38:22 INFO 7 files removed
Oct 24 05:38:22 DDEBUG Cleaning up.
Oct 24 05:38:22 INFO --- logging initialized ---
Oct 24 05:38:22 DDEBUG timer: config: 3 ms
Oct 24 05:38:22 DEBUG cachedir: /var/cache/dnf
Oct 24 05:38:22 DEBUG Loaded plugins: builddep, copr, config-manager, noroot, 
reposync, generate_completion_cache, Query, protected_packages, 
needs-restarting, download, playground, debuginfo-install
Oct 24 05:38:22 DEBUG DNF version: 1.1.10
Oct 24 05:38:22 DDEBUG Command: dnf check-update 
Oct 24 05:38:22 DDEBUG Installroot: /
Oct 24 05:38:22 DDEBUG Releasever: 4.0
Oct 24 05:38:22 DDEBUG Base command: check-update
Oct 24 05:38:22 DDEBUG Extra commands: []
Oct 24 05:38:22 DDEBUG repo: downloading from remote: qubes-dom0-cached, 
_Handle: metalnk: None, mlist: None, urls ['file:///var/lib/qubes/updates'].
Oct 24 05:38:22 DEBUG not found deltainfo for: Qubes OS Repository for Dom0
Oct 24 05:38:22 DEBUG not found updateinfo for: Qubes OS Repository for Dom0
Oct 24 05:38:22 DDEBUG timer: sack setup: 221 ms
Oct 24 05:38:22 DEBUG qubes-dom0-cached: using metadata from Wed Oct 24 
05:38:22 2018.
Oct 24 05:38:22 DEBUG Completion plugin: Generating completion cache...
Oct 24 05:38:23 DDEBUG Cleaning up.
Oct 24 05:38:23 INFO --- logging initialized ---
Oct 24 05:38:23 DDEBUG timer: config: 3 ms
Oct 24 05:38:23 DEBUG cachedir: /var/cache/dnf
Oct 24 05:38:23 DEBUG Loaded plugins: noroot, generate_completion_cache, 
reposync, config-manager, debuginfo-install, builddep, Query, download, 
playground, copr, protected_packages, needs-restarting
Oct 24 05:38:23 DEBUG DNF version: 1.1.10
Oct 24 05:38:23 DDEBUG Command: dnf -q check-update 
Oct 24 05:38:23 DDEBUG Installroot: /
Oct 24 05:38:23 DDEBUG Releasever: 4.0
Oct 24 05:38:23 DDEBUG Base command: check-update
Oct 24 05:38:23 DDEBUG Extra commands: []
Oct 24 05:38:23 DDEBUG repo: downloading from remote: qubes-dom0-cached, 
_Handle: metalnk: None, mlist: None, urls ['file:///var/lib/qubes/updates'].
Oct 24 05:38:23 DEBUG not found deltainfo for: Qubes OS Repository for Dom0
Oct 24 05:38:23 DEBUG 

[qubes-users] Re: R4.0 with fedora 28 template sys-net fails to sync time

2018-09-15 Thread Marcus Linsner
On Saturday, September 15, 2018 at 9:45:34 PM UTC+2, Alex wrote:
> Hi all,
> I happen to have run into the problem as per the subject. What happened
> is this:
> 
> * I recently installed a fully clean R4.0 system, with default templates
> and sys-* qubes (this means fedora 26)
> * I upgraded the default template, after cloning it, to fedora 28
> * This means that now I have a fedora-28 based sys-net
> * The system fails to sync the time to NTP servers
> 
> What I debugged until now:
> * in sys-net, the service systemd-timesyncd should start and update the
> time - it's enabled by default
> * it does not, because it fails to start due to some inaccessible
> directory that is not detailed in the logs
> * googling around I found that it looks like one of the usual
> surprise-ridden features of systemd, namely DynamicUser, that seems to
> have problems with FUSE mounts and the custom-namespace-based isolation
> (https://utcc.utoronto.ca/~cks/space/blog/linux/SystemdTimesyncdFailure?showcomments).
> I'm thinking this issue is manifesting itself with some of the Qubes
> infrastructure.
> 
> Does anybody have a recommended way of fixing this, that avoids just
> waiting for the systemd guys to fix this? I don't like the idea of
> editing systemd's "packaged" unit files, nor am I willing to go set
> weird permission / mount options for qubes' directory mounts. What I'd
> like to have is a way of having dom0's time set from a network (NTP)
> source without necessarily having to successfully set the time in my
> sys-net.
> 
> What I'm thinking of doing is having a separate clock vm, with a more
> standard ntpd, but I'm not sure of the network "position" inside qubes -
> will it be enough to give it "sys-net" as the network vm?
> 
> Thanks in advance for any guidance...
> 
> -- 
> Alex

Please see this[1] for a fix.

[1] https://github.com/QubesOS/qubes-issues/issues/3983

-- 
You received this message because you are subscribed to the Google Groups 
"qubes-users" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to qubes-users+unsubscr...@googlegroups.com.
To post to this group, send email to qubes-users@googlegroups.com.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/qubes-users/06b36685-5cad-4f75-889b-e5aff51d5041%40googlegroups.com.
For more options, visit https://groups.google.com/d/optout.


Re: [qubes-users] Re: Qubes locks up half the time on startup with filenotfound error trying to run qubes manager

2018-09-14 Thread Marcus Linsner
On Thursday, September 13, 2018 at 7:43:57 PM UTC+2, Guy Frank wrote:
> On Tuesday, September 11, 2018 at 5:12:00 PM UTC-5, Guy Frank wrote:
> > On Tuesday, September 11, 2018 at 4:29:02 PM UTC-5, awokd wrote:
> > > On Tue, September 11, 2018 7:10 pm, Guy Frank wrote:
> > > > On Tuesday, September 11, 2018 at 1:44:13 PM UTC-5, Guy Frank wrote:
> > > >
> > > >> Was a bit premature thinking that my qubes installation was stable.
> > > >> About half the time I start the system, it locks up and I am only able
> > > >> to access Dom0 (qubes manager will not open, nor will any qubes, even
> > > >> from command line).  The system gives a serious 'filenotfound' error
> > > >> msg.  I've looked at previous posts on problems like this, but my
> > > >> problem doesn't seem to fit what others reported--qubes.xml is not
> > > >> empty and disk utilization is minimal (or near 50% in one case).  The
> > > >> error message is:
> > > >>
> > > >> #
> > > >> Whoops.  A critical error has occurred.  This is most likely a bug in
> > > >> Qubes Manager
> > > >> FileNotFoundError:  [Errno 2]
> > > >> No such file or directory
> > > >> at line 9 of file /usr/bin/qubes-qube-manager #
> > > >>
> > > >>
> > > >> Line 9 reads:  load_entry_point('qubesmanager==4.0.16',
> > > >> 'console_scripts', 'qubes-qube-manager')()
> > > >>
> > > >>
> > > >> Ok, so the weird thing is that this works fine half the time.  On half
> > > >> of my boot ups, I don't encounter this problem.  So if there is no such
> > > >> file or directory, it's not there half the time.  qubes.xml looks good
> > > >> (to my untrained eyes), and df -h shows nothing at more than 1%
> > > >> utilization except for /dev/nvme0n1p1 mounted on /boot/efi which is 56%
> > > >> of 200MB.  nvme0n1p1 is, I believe, the GPT table?
> > > >>
> > > >> I'm worried about coming to rely on this installation if at some point
> > > >> the error doesn't go away every other reboot and becomes permanent.  Am
> > > >> trying updates now--maybe that will help.
> > > >>
> > > >> Guy
> > > >>
> > > >
> > > > Updating the software in dom0 doesn't make the problem disappear, though
> > > > now the main error message is:
> > > >
> > > > QubesDaemonCommunicationError: Failed to connect to qubesd service:
> > > > [Errno 2] No such file or directory
> > > > at line 9 of file /usr/bin/qubes-qube-manager
> > > 
> > > Nothing related earlier in the "sudo journalctl -e" log? Try "sudo
> > > systemctl restart qubesd"?
> > 
> > Thanks awokd!  I'll give these a try next time I run into the problem
> 
> Ok, so on my next reboot, it ran into this problem again.  I made a copy of 
> the journalctl log and tried to restart qubesd, to no effect.  
> 
> The attached file, jnlctlErr.txt, if you scroll down to 09:24:43, I think you 
> can see where the Qubes OS daemon fails.  It is immediately preceded by the 
> 1d.2 pci device worker failing, suggesting that something about this failure 
> is causing the daemon from starting (which occurs below the blank line I 
> added to the log). 1d.2 is a PCI Bridge, Intel Corp Device a332.  No idea 
> what exactly this is or how to find out (not a hardware person).  
> 
> One thing I thought of is the fact that there's a PS/2 card in the machine to 
> which a PS/2 keyboard & mouse are attached.  Neither has ever worked in Qubes 
> (though they worked in Windows), so maybe that's what's triggering the 
> problem?  Will do some testing.
> 
> When I attempt to start qubes daemon w/ sudo systemctl restart qubesd, 
> journalctl log shows other errors.  The qubes daemon doesn't get started and 
> I can't use the system.
> 
> What I can do is reboot.  And about every other time, Qubes comes up and is 
> fine.  My concern is that at some point it'll stop doing this, so I'd really 
> like to figure out how to solve this problem.
> 
> Guy

Looking at the relevant errors, in context (and the time between them):

...
Sep 13 09:20:23 localhost kernel: usb 1-10.1: New USB device found, 
idVendor=413c, idProduct=2002
Sep 13 09:20:23 localhost kernel: usb 1-10.1: New USB device strings: Mfr=1, 
Product=2, SerialNumber=0
Sep 13 09:20:23 localhost kernel: usb 1-10.1: Product: Dell USB Keyboard Hub
Sep 13 09:20:23 localhost kernel: usb 1-10.1: Manufacturer: Dell
Sep 13 09:20:23 localhost kernel: input: Dell Dell USB Keyboard Hub as 
/devices/pci:00/:00:14.0/usb1/1-10/1-10.1/1-10.1:1.0/0003:413C:2002.0001/input/input3
Sep 13 09:20:23 localhost kernel: hid-generic 0003:413C:2002.0001: 
input,hidraw0: USB HID v1.10 Keyboard [Dell Dell USB Keyboard Hub] on 
usb-:00:14.0-10.1/input0
Sep 13 09:20:23 localhost kernel: input: Dell Dell USB Keyboard Hub as 
/devices/pci:00/:00:14.0/usb1/1-10/1-10.1/1-10.1:1.1/0003:413C:2002.0002/input/input4
Sep 13 09:20:23 localhost kernel: usb 4-3: new low-speed USB device number 2 
using ohci-pci
...
Sep 13 09:20:23 localhost kernel: hid-generic 0003:413C:2002.0002: 
input,hidraw1: USB HID v1.10 Device [Dell Dell USB Keyboard Hub] on 

[qubes-users] Re: 'No Bootable Device' error after clean Qubes 4 install

2018-09-07 Thread Marcus Linsner
On Friday, September 7, 2018 at 12:27:04 PM UTC+2, Marcus Linsner wrote:
> Actually I just looked, my MBR starts with lots of zeroes until the 
> partitions are defined

In fact the MBR contains the definition of only one (dummy) partition, since 
it's GPT, like:

[ctor@dom0 ~]$ sudo dd if=/dev/sda of=here.mbr bs=512 count=1
1+0 records in
1+0 records out
512 bytes copied, 8.5102e-05 s, 6.0 MB/s
[ctor@dom0 ~]$ fdisk -l here.mbr 
GPT PMBR size mismatch (468862127 != 0) will be corrected by w(rite).
Disk here.mbr: 512 B, 512 bytes, 1 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x

Device Boot Start   End   Sectors   Size Id Type
here.mbr1   1 468862127 468862127 223.6G ee GPT

but this doesn't matter.

I see the 'Boot' field is present only here (when the disklabel type is dos), 
but not when the type is gpt.
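
By the way, a quick way to double-check that dummy entry (a sketch, reusing the 
here.mbr dump from above; xxd ships with vim-common in dom0, I believe):

# the MBR partition table starts at offset 446, and byte 4 of the first
# 16-byte entry (absolute offset 450) is the partition type, which should
# be 0xee for a GPT protective MBR (matching the "ee" in the fdisk output)
$ xxd -s 450 -l 1 here.mbr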



[qubes-users] Re: 'No Bootable Device' error after clean Qubes 4 install

2018-09-07 Thread Marcus Linsner
On Thursday, September 6, 2018 at 9:03:48 PM UTC+2, Guy Frank wrote:
> I did a fully automatic disk partition on my last attempt to install Qubes 4. 
>  When I try to boot my new Qubes install, I get a 'no bootable device' error. 
>  I looked at the partitioning scheme using a live usb drive and it shows a 
> /boot partition, with EFI and GRUB information and a large encrypted 
> partition, which presumably holds / and swap.  It may be relevant that the 
> installation was on a 500GB SSD drive and that there is also a 2TB hard disk 
> in the system.  I used gparted to delete all partitions from both devices 
> before installing Qubes.  The 2TB device is entirely unallocated and using 
> BIOS to turn off recognition of everything but the SSD drive has no effect.  
> Also the system indicates it has UEFI firmware.
> 
> I'm not very familiar w/ how the boot process works, but had thought there 
> would need to be a GPT table or MBR on the disk, but automatic boot doesn't 
> put one there.  In previous attempts to install, I tried to create an ESP 
> (GPT) partition, but the Qubes installer would not permit this (and doesn't 
> have it as an option).  In another attempt, I added a BIOSBOOT partition (for 
> MBR table, I presume) of 1MB.  Installation halts at post-installation (about 
> half way through) and never completes.
> 
> Any suggestions?

I have one, but it may not be any good.

Try to enter BIOS and somehow select UEFI boot, which should list "Qubes" as 
the boot device. I don't think this needs any MBR (code) as long as the 
partitions exist, like:

$ sudo fdisk -l /dev/sda
Disk /dev/sda: 223.6 GiB, 240057409536 bytes, 468862128 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: D72B9D42-215E-483B-82A3-28CA53959280

Device   Start   End   Sectors   Size Type
/dev/sda1 2048   1640447   1638400   800M EFI System
/dev/sda2  1640448 468860927 467220480 222.8G Linux filesystem

See, none is marked as bootable (there would be an "*").
The EFI System one is a vfat partition:
$ lsblk|grep sda
sda      8:0    0 223.6G  0 disk
├─sda2   8:2    0 222.8G  0 part
└─sda1   8:1    0   800M  0 part /boot/efi

The point is, the BIOS should itself be capable of reading your /dev/sda1 FAT 
partition and booting from it (in UEFI mode) without needing to run any MBR 
code (aka Legacy). Actually, I just looked: my MBR starts with lots of zeroes 
until the partitions are defined, but the very next sector starts with "EFI 
PART" (which is the GPT header signature), so maybe there's some kind of stamp 
that's required? I don't know, I'm new to all this EFI stuff.

In BIOS you may have two settings, something like "Legacy" (which needs MBR 
code and a partition set active in order to boot - but maybe the Qubes 
installer didn't prepare for this if it detected your system supports UEFI? 
I'm guessing here) and something like "UEFI". See if you can try to boot in 
UEFI mode. That's my suggestion, basically.


See also: https://www.qubes-os.org/doc/uefi-troubleshooting/
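
(A sketch, in case it helps: from a running Qubes you can check whether the 
firmware even has a "Qubes" entry, and recreate it if not; the disk/partition 
arguments below assume the ESP is the first partition of /dev/sda, as in my 
case.)

$ sudo efibootmgr -v
# look for a Boot entry labeled "Qubes" pointing at \EFI\qubes\xen.efi;
# if it's missing, something like this should recreate it:
$ sudo efibootmgr -v -c -u -L Qubes -l /EFI/qubes/xen.efi -d /dev/sda -p 1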




[qubes-users] Re: How to cache these disk reads done by the likes of [14.xvda-0]

2018-09-06 Thread Marcus Linsner
On Wednesday, September 5, 2018 at 4:26:11 AM UTC+2, Marcus Linsner wrote:
> Regarding OP, maybe I should look into this: 
> https://www.kernel.org/doc/Documentation/filesystems/caching/fscache.txt
> I'm unsure if it'll work for what I want, yet.
> 
> On Tuesday, August 28, 2018 at 10:35:23 PM UTC+2, Marcus Linsner wrote:
> > Side question: how can I send eg. sysrq+m to a qube? (seems not possible 
> > according to this 2016 post https://phabricator.whonix.org/T553#10438 ? )
> 
> Looks like it's possible, from dom0:
> 
> $ xl sysrq
> 'xl sysrq' requires at least 2 arguments.
> 
> Usage: xl [-vf] sysrq <Domain> <letter>
> 
> Send a sysrq to a domain.
> 
> 
> Or, maybe not:
> [   61.904917] xen:manage: sysrq_handler: Error -13 writing sysrq in 
> control/sysrq
> 
> I tried on a disposable VM, 's' and 'h'.
> $ sysctl kernel.sysrq
> already reports =1
> 
> On first glance I found the code responsible here: 
> https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg01068.html
> if (sysrq_key != '\0') {
>   err = xenbus_printf(xbt, "control", "sysrq", "%c", '\0');
> ...
> I'm not sure if that looks right, I mean why isn't sysrq_key written instead 
> of just a '\0' ? perhaps I'm misreading this. I'm not a programmer :D

Ah sweet! There's a fix: 
https://xenbits.xen.org/gitweb/?p=xen.git;a=commitdiff;h=30a970906038a4d360e1f7ee29ba80ef832dd78b;hp=6de6c8d306c091eb7381575d250beaf2eeaf02df

I can't wait to test it whenever I figure out how to :D (possibly using 
qubes-builder to compile xen, ...)
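
In the meantime, for reference, a minimal sketch of what the test should look 
like from dom0 once a fixed Xen/kernel is in place (the domain ID 14 is 
illustrative; get the real one from `xl list`):

$ xl list            # find the target qube's domain ID
$ xl sysrq 14 m      # send sysrq+m (dump memory info) to domain 14
# then check that qube's own dmesg output for the memory report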



[qubes-users] Re: How to show boot entries?

2018-09-06 Thread Marcus Linsner
On Wednesday, September 5, 2018 at 1:37:51 PM UTC+2, Marcus Linsner wrote:
> On Wednesday, September 5, 2018 at 6:17:46 AM UTC+2, Marcus Linsner wrote:
> > On Thursday, March 15, 2018 at 7:08:25 AM UTC+1, coeu...@gmail.com wrote:
> > > Hello, guys. 
> > > 
> > > I want to show boot entries so that I can select certain kernel to boot, 
> > > and I'm using EFI/qubes/xen.efi as boot binary. Currently, it will 
> > > directly boot the default kernel. Could anyone give some advices?
> > > 
> > > BTW, here is the reason: I have multiple kernels installed and 
> > > kernel-latest-4.15.6-1 may raise kernel panic errors on Raven Ridge 
> > > platform, but kernel-4.14.18-1 works just fine.
> > > 
> > > Thanks!
> > > D.F.
> > 
> > I don't understand why there are multiple entries in xen.cfg if the only 
> > way to select any is by setting the default= to one of them.
> > 
> > So, I had to make a copy of the qubes/ folder where xen.cfg is located, 
> > then modify the copied xen.cfg to choose a different kernel. Then add a new 
> > boot entry (which I can only select to boot from by entering BIOS btw), 
> > which will be set as default when added by this command:
> > 
> > first see what we have:
> > $sudo efibootmgr -v
> > then add one more (BIOS-visible) entry:
> > $ sudo efibootmgr -v -c -u -L Mewbs -l /EFI/mewbs/xen.efi -d /dev/sda -p 1
> > then see what happened:
> > $ sudo efibootmgr -v
> > 
> > (I'd copy/paste but it's harder to do from dom0 and I'm currently 
> > lazy/tired. #notproud)
> Alright, it looks like it's easier than I thought, copy/pasting from dom0 
> (was previously using qvm-copy-to-vm), according to 
> https://www.qubes-os.org/doc/copy-from-dom0/ , step 3 (for Qubes 4.0), to 
> quote from there:
> "In other versions, write the data you wish to copy into 
> /var/run/qubes/qubes-clipboard.bin, then echo -n dom0 > 
> /var/run/qubes/qubes-clipboard.bin.source. Then use Ctrl-Shift-V to paste the 
> data to the desired VM."
> 
> There is another file /var/run/qubes/qubes-clipboard.bin.xevent which 
> contains a number and it doesn't need to be modified or touched for the 
> copy/pasting to work.
> 
> With that in mind, let's see how to add another UEFI entry (which, as a 
> reminder, can only be selected from BIOS's Boot Menu - which in my case 
> requires fully entering BIOS - there's no F12 key (but maybe it depends on 
> settings, like secure boot must be disabled?)).
> Let's add an entry which boots with smt=on to enable all cores, thus reducing 
> security, according to: https://www.qubes-os.org/news/2018/09/02/qsb-43/
> 
> Quick help for reference:
> 
> [ctor@dom0 ~]$ sudo efibootmgr -h
> efibootmgr version 14
> usage: efibootmgr [options]
>   -a | --active         sets bootnum active
>   -A | --inactive       sets bootnum inactive
>   -b | --bootnum XXXX   modify BootXXXX (hex)
>   -B | --delete-bootnum delete bootnum
>   -c | --create         create new variable bootnum and add to bootorder
>   -C | --create-only    create new variable bootnum and do not add to bootorder
>   -D | --remove-dups    remove duplicate values from BootOrder
>   -d | --disk disk      (defaults to /dev/sda) containing loader
>   -r | --driver         Operate on Driver variables, not Boot Variables.
>   -e | --edd [1|3|-1]   force EDD 1.0 or 3.0 creation variables, or guess
>   -E | --device num     EDD 1.0 device number (defaults to 0x80)
>   -g | --gpt            force disk with invalid PMBR to be treated as GPT
>   -i | --iface name     create a netboot entry for the named interface
>   -l | --loader name    (defaults to \EFI\redhat\grub.efi)
>   -L | --label label    Boot manager display label (defaults to "Linux")
>   -m | --mirror-below-4G t|f  mirror memory below 4GB
>   -M | --mirror-above-4G X    percentage memory to mirror above 4GB
>   -n | --bootnext XXXX  set BootNext to XXXX (hex)
>   -N | --delete-bootnext delete BootNext
>   -o | --bootorder XXXX,YYYY,ZZZZ,...  explicitly set BootOrder (hex)
>   -O | --delete-bootorder delete BootOrder
>   -p | --part part      (defaults to 1) containing loader
>   -q | --quiet          be quiet
>   -t | --timeout seconds  set boot manager timeout waiting for user input.
>   -T | --delete-timeout   delete Timeout.
>   -u | --unicode | --UCS-2  pass extra args as UCS-2 (default is ASCII)
>   -v | --verbose        print additional information
>   -V | --version        return version and exit
>   -w | --write-signature  write unique si

[qubes-users] Re: How to show boot entries?

2018-09-06 Thread Marcus Linsner
On Wednesday, September 5, 2018 at 1:37:51 PM UTC+2, Marcus Linsner wrote:
> With that in mind, let's see how to add another UEFI entry (which, as a 
> reminder, can only be selected from BIOS's Boot Menu - which in my case 
> requires fully entering BIOS - there's no F12 key (but maybe it depends on 
> settings, like secure boot must be disabled?)).

A slight correction here: the boot key (for my mobo/BIOS) is F8 (not F12 as I 
wrongly assumed from experience with other PCs), and I found out it doesn't 
require entering BIOS. However, it only shows the boot menu if the BIOS 
Admin (supervisor?) password (as opposed to the BIOS User password) is entered 
when the PC is set to prompt for a password before boot. Perhaps this depends 
on some BIOS settings, so that it would otherwise work with the user password 
too.



[qubes-users] Re: How to show boot entries?

2018-09-05 Thread Marcus Linsner
On Wednesday, September 5, 2018 at 6:17:46 AM UTC+2, Marcus Linsner wrote:
> On Thursday, March 15, 2018 at 7:08:25 AM UTC+1, coeu...@gmail.com wrote:
> > Hello, guys. 
> > 
> > I want to show boot entries so that I can select certain kernel to boot, 
> > and I'm using EFI/qubes/xen.efi as boot binary. Currently, it will directly 
> > boot the default kernel. Could anyone give some advices?
> > 
> > BTW, here is the reason: I have multiple kernels installed and 
> > kernel-latest-4.15.6-1 may raise kernel panic errors on Raven Ridge 
> > platform, but kernel-4.14.18-1 works just fine.
> > 
> > Thanks!
> > D.F.
> 
> I don't understand why there are multiple entries in xen.cfg if the only way 
> to select any is by setting the default= to one of them.
> 
> So, I had to make a copy of the qubes/ folder where xen.cfg is located, then 
> modify the copied xen.cfg to choose a different kernel. Then add a new boot 
> entry (which I can only select to boot from by entering BIOS btw), which will 
> be set as default when added by this command:
> 
> first see what we have:
> $sudo efibootmgr -v
> then add one more (BIOS-visible) entry:
> $ sudo efibootmgr -v -c -u -L Mewbs -l /EFI/mewbs/xen.efi -d /dev/sda -p 1
> then see what happened:
> $ sudo efibootmgr -v
> 
> (I'd copy/paste but it's harder to do from dom0 and I'm currently lazy/tired. 
> #notproud)
Alright, it looks like it's easier than I thought, copy/pasting from dom0 (I 
was previously using qvm-copy-to-vm), according to 
https://www.qubes-os.org/doc/copy-from-dom0/ , step 3 (for Qubes 4.0). To 
quote from there:
"In other versions, write the data you wish to copy into 
/var/run/qubes/qubes-clipboard.bin, then echo -n dom0 > 
/var/run/qubes/qubes-clipboard.bin.source. Then use Ctrl-Shift-V to paste the 
data to the desired VM."

There is another file, /var/run/qubes/qubes-clipboard.bin.xevent, which 
contains a number; it doesn't need to be modified or touched for the 
copy/pasting to work.
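
Putting the quoted steps together, a minimal sketch (run in dom0; the text is 
just an example):

$ echo -n 'text to share' > /var/run/qubes/qubes-clipboard.bin
$ echo -n dom0 > /var/run/qubes/qubes-clipboard.bin.source
# now focus a window of the target VM and press Ctrl-Shift-V (then a normal
# Ctrl+V inside the application) to paste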

With that in mind, let's see how to add another UEFI entry (which, as a 
reminder, can only be selected from BIOS's Boot Menu - which in my case 
requires fully entering BIOS - there's no F12 key (but maybe it depends on 
settings, like secure boot must be disabled?)).
Let's add an entry which boots with smt=on to enable all cores, thus reducing 
security, according to: https://www.qubes-os.org/news/2018/09/02/qsb-43/
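
Concretely, the whole procedure would look something like this (a sketch; it 
assumes the ESP is mounted at /boot/efi and reuses the 'Mewbs' name and the 
create-command from above - adjust names, disk and partition to taste):

$ sudo cp -r /boot/efi/EFI/qubes /boot/efi/EFI/mewbs
# edit /boot/efi/EFI/mewbs/xen.cfg and append smt=on to the Xen options= line
$ sudo efibootmgr -v -c -u -L Mewbs -l /EFI/mewbs/xen.efi -d /dev/sda -p 1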

Quick help for reference:

[ctor@dom0 ~]$ sudo efibootmgr -h
efibootmgr version 14
usage: efibootmgr [options]
-a | --active         sets bootnum active
-A | --inactive       sets bootnum inactive
-b | --bootnum XXXX   modify BootXXXX (hex)
-B | --delete-bootnum delete bootnum
-c | --create         create new variable bootnum and add to bootorder
-C | --create-only    create new variable bootnum and do not add to bootorder
-D | --remove-dups    remove duplicate values from BootOrder
-d | --disk disk      (defaults to /dev/sda) containing loader
-r | --driver         Operate on Driver variables, not Boot Variables.
-e | --edd [1|3|-1]   force EDD 1.0 or 3.0 creation variables, or guess
-E | --device num     EDD 1.0 device number (defaults to 0x80)
-g | --gpt            force disk with invalid PMBR to be treated as GPT
-i | --iface name     create a netboot entry for the named interface
-l | --loader name    (defaults to \EFI\redhat\grub.efi)
-L | --label label    Boot manager display label (defaults to "Linux")
-m | --mirror-below-4G t|f  mirror memory below 4GB
-M | --mirror-above-4G X    percentage memory to mirror above 4GB
-n | --bootnext XXXX  set BootNext to XXXX (hex)
-N | --delete-bootnext delete BootNext
-o | --bootorder XXXX,YYYY,ZZZZ,...  explicitly set BootOrder (hex)
-O | --delete-bootorder delete BootOrder
-p | --part part      (defaults to 1) containing loader
-q | --quiet          be quiet
-t | --timeout seconds  set boot manager timeout waiting for user input.
-T | --delete-timeout   delete Timeout.
-u | --unicode | --UCS-2  pass extra args as UCS-2 (default is ASCII)
-v | --verbose        print additional information
-V | --version        return version and exit
-w | --write-signature  write unique sig to MBR if needed
-y | --sysprep        Operate on SysPrep variables, not Boot Variables.
-@ | --append-binary-args file  append extra args from file (use "-" for stdin)
-h | --help           show help/usage

Let's see what we have already:

[ctor@dom0 ~]$ sudo efibootmgr -v 
BootCurrent: 0002
Timeout: 1 seconds
BootOrder: 0000,0002
Boot0000* Qubes 
HD(1,GPT,a8c00c7c-aa3d-4418-8e1a-d3c5c158ac2d,0x800,

[qubes-users] Re: System freezes several times a day

2018-09-04 Thread Marcus Linsner
On Monday, September 25, 2017 at 5:35:44 PM UTC+2, Jan Martin Krämer wrote:
> I am not sure if magic sysrq is enabled on qubes, I guess I would have to 
> test it while it is still working, but if it is, it also didn't work.

On dom0, it looks like only the sync command (sysrq+s) of sysrq is enabled by 
default:

[ctor@dom0 ~]$ grep -nH sysrq -- /usr/lib/sysctl.d/* /etc/sysctl.d/*
/usr/lib/sysctl.d/50-default.conf:16:# Use kernel.sysrq = 1 to allow all keys.
/usr/lib/sysctl.d/50-default.conf:18:kernel.sysrq = 16
/etc/sysctl.d/95-sysrq.conf:1:kernel.sysrq=1

I actually had to enable them all by creating /etc/sysctl.d/95-sysrq.conf (as 
seen above).

All info from here: 
https://www.kernel.org/doc/Documentation/admin-guide/sysrq.rst

   -  0 - disable sysrq completely
   -  1 - enable all functions of sysrq
   - >1 - bitmask of allowed sysrq functions (see below for detailed function
 description)::

  2 =   0x2 - enable control of console logging level
  4 =   0x4 - enable control of keyboard (SAK, unraw)
  8 =   0x8 - enable debugging dumps of processes etc.
 16 =  0x10 - enable sync command
 32 =  0x20 - enable remount read-only
 64 =  0x40 - enable signalling of processes (term, kill, oom-kill)
128 =  0x80 - allow reboot/poweroff
256 = 0x100 - allow nicing of all RT tasks
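
So, if you'd rather not enable everything, the value is just the sum of the 
bits you want; e.g. to allow only the sync command plus reboot/poweroff (an 
untested sketch):

$ echo 'kernel.sysrq = 144' | sudo tee /etc/sysctl.d/95-sysrq.conf   # 16 + 128
$ sudo sysctl --system   # reload all sysctl.d files (or just reboot)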



[qubes-users] Re: How to cache these disk reads done by the likes of [14.xvda-0]

2018-09-04 Thread Marcus Linsner
Regarding OP, maybe I should look into this: 
https://www.kernel.org/doc/Documentation/filesystems/caching/fscache.txt
I'm unsure if it'll work for what I want, yet.

On Tuesday, August 28, 2018 at 10:35:23 PM UTC+2, Marcus Linsner wrote:
> Side question: how can I send eg. sysrq+m to a qube? (seems not possible 
> according to this 2016 post https://phabricator.whonix.org/T553#10438 ? )

Looks like it's possible, from dom0:

$ xl sysrq
'xl sysrq' requires at least 2 arguments.

Usage: xl [-vf] sysrq <Domain> <letter>

Send a sysrq to a domain.


Or, maybe not:
[   61.904917] xen:manage: sysrq_handler: Error -13 writing sysrq in 
control/sysrq

I tried on a disposable VM, 's' and 'h'.
$ sysctl kernel.sysrq
already reports =1

On first glance I found the code responsible here: 
https://lists.xenproject.org/archives/html/xen-devel/2018-06/msg01068.html
if (sysrq_key != '\0') {
  err = xenbus_printf(xbt, "control", "sysrq", "%c", '\0');
...
I'm not sure if that looks right, I mean why isn't sysrq_key written instead of 
just a '\0' ? perhaps I'm misreading this. I'm not a programmer :D





Re: [qubes-users] systemd replacement for dom0

2018-09-04 Thread Marcus Linsner
On Monday, September 3, 2018 at 1:45:27 PM UTC+2, Rusty Bird wrote:

> Marcus Linsner:
> > I'm mainly asking because I fail to make certain services stop in a
> > certain order at reboot/shutdown. Hmm, maybe I should focus on
> > starting them in a certain order? then maybe shutdown will do it in
> > reverse order [...]
> 
> Yes, that's how systemd does it. See Before= and After= in the
> systemd.unit manpage.

I know it seems simple, and yet in practice it doesn't work (or rather, 
sometimes it works, but mostly it doesn't). 
For example: in dom0, I added `udisks2.service` to the `After=` line of 
`qubes-core.service` (I did do the required `sudo systemctl daemon-reload` - 
looks like `sudo` isn't required, thanks to passwordless sudo I suppose). The 
next reboot (from xfce Menu->Logout->Restart) seemed to have worked, in the 
sense that there were no more timeouts reported in the log, but on the next 
reboot the timeouts were back. Seen here: 
https://gist.github.com/constantoverride/61207ebf622263d25fb1e5b1f11d0148/9443ef0babe29b32622dd0f17a2435a9e0f1df92#gistcomment-2696596

Another example: instead of `udisks2.service` I added `dev-block-253:0.device` 
to the `After=` line of `qubes-core.service`, and then the next TWO reboots 
were fine (aka no `timed out` or `timeout` reported), but on the third reboot 
the timeouts were back, without me changing anything.
All of that seen here: 
https://gist.github.com/constantoverride/61207ebf622263d25fb1e5b1f11d0148/9443ef0babe29b32622dd0f17a2435a9e0f1df92#gistcomment-2696607
(the contents of `qubes-core.service` can be seen one or two pages above the 
linked comment; the actual logs of the reboots are linked to in the comment)

Maybe it's a systemd bug? But since dom0 is stuck on Fedora 25 (as opposed to 
Fedora 28), I cannot get it upgraded, I believe.

In yet another example, I even tried replacing `dev-block-253:0.device` with 
`system.device` (later with `system.device systemd.device`), but there were no 
reboots (so far) that didn't time out.

Anyway, the point is: I don't know how to make use of systemd so that this 
stalling doesn't happen on reboot/shutdown: 
https://github.com/QubesOS/qubes-issues/issues/1581#issuecomment-417968474
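
(For what it's worth, a cleaner way to experiment with such ordering tweaks is 
a drop-in override instead of editing the packaged unit file directly - a 
sketch:

$ sudo systemctl edit qubes-core.service
# in the editor that opens, add:
#   [Unit]
#   After=udisks2.service
# this lands in /etc/systemd/system/qubes-core.service.d/override.conf and
# survives package updates; systemctl edit reloads the daemon on exit)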


:)



[qubes-users] systemd replacement for dom0

2018-09-02 Thread Marcus Linsner
Has anyone tried (and succeeded at) replacing dom0's systemd with something 
else that's not systemd? Is it even doable with Qubes?

I'm mainly asking because I fail to make certain services stop in a certain 
order at reboot/shutdown. Hmm, maybe I should focus on starting them in a 
certain order? Then maybe shutdown will do it in reverse order, rather than 
seemingly all at once.



[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-29 Thread Marcus Linsner
On Friday, August 17, 2018 at 2:57:31 AM UTC+2, Marcus Linsner wrote:
> "For example, consider a case where you have zero swap and system is nearly 
> running out of RAM. The kernel will take memory from e.g. Firefox (it can do 
> this because Firefox is running executable code that has been loaded from 
> disk - the code can be loaded from disk again if needed). If Firefox then 
> needs to access that RAM again N seconds later, the CPU generates "hard 
> fault" which forces Linux to free some RAM (e.g. take some RAM from another 
> process), load the missing data from disk and then allow Firefox to continue 
> as usual. This is pretty similar to normal swapping and kswapd0 does it.  " - 
> Mikko Rantalainen Feb 15 at 13:08

Good news: no more disk thrashing with this patch [1] (also attached), and I'm 
keeping track of how to properly get rid of this disk thrashing in [2].

Bad news: I made the patch and I've no idea how good it is (since I am a noob 
:D) or what the side-effects of using it are. Likely a better patch can be 
made! (but none who know how to do it right have answered/helped yet :D so... 
it's, for me, better than nothing)

I'm not going to post here anymore, to allow OP to be answered (since it seems 
to be a different issue)

[1] 
https://github.com/constantoverride/qubes-linux-kernel/blob/acd686a5019c7ab6ec10dc457bdee4830e2d741f/patches.addon/le9b.patch
[2] https://stackoverflow.com/q/52067753/10239615

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2..7636498 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -208,7 +208,7 @@ enum lru_list {
 
 #define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)
 
-#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
+#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_INACTIVE_FILE; lru++)
 
 static inline int is_file_lru(enum lru_list lru)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f8..1f3ffb5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2234,7 +2234,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
 
 	anon  = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) +
 		lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES);
-	file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
+	file  = //lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
 		lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);
 
 	spin_lock_irq(&pgdat->lru_lock);
@@ -2345,7 +2345,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 			 sc->priority == DEF_PRIORITY);
 
 	blk_start_plug(&plug);
-	while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
+	while (nr[LRU_INACTIVE_ANON] || //nr[LRU_ACTIVE_FILE] ||
 	nr[LRU_INACTIVE_FILE]) {
 		unsigned long nr_anon, nr_file, percentage;
 		unsigned long nr_scanned;
@@ -2372,7 +2372,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 		 * stop reclaiming one LRU and reduce the amount scanning
 		 * proportional to the original scan target.
 		 */
-		nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
+		nr_file = nr[LRU_INACTIVE_FILE] //+ nr[LRU_ACTIVE_FILE]
+			;
 		nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];
 
 		/*
@@ -2391,7 +2392,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
 			percentage = nr_anon * 100 / scan_target;
 		} else {
 			unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
-		targets[LRU_ACTIVE_FILE] + 1;
+		//targets[LRU_ACTIVE_FILE] + 
+		1;
 			lru = LRU_FILE;
 			percentage = nr_file * 100 / scan_target;
 		}


[qubes-users] How to cache these disk reads done by the likes of [14.xvda-0]

2018-08-28 Thread Marcus Linsner
How to cache these disk reads that are reported by `sudo iotop` on dom0 as:

Total DISK READ : 188.51 M/s | Total DISK WRITE :   0.00 B/s
Actual DISK READ: 188.92 M/s | Actual DISK WRITE:   0.00 B/s
  TID  PRIO  USER     DISK READ   DISK WRITE  SWAPIN      IO>    COMMAND

13206 be/4 root   51.63 M/s0.00 B/s  0.00 %  0.00 % [14.xvda-0]
13207 be/4 root   54.63 M/s0.00 B/s  0.00 %  0.00 % [14.xvda-1]
13209 be/4 root   49.11 M/s0.00 B/s  0.00 %  0.00 % [14.xvda-3]
13208 be/4 root   33.13 M/s0.00 B/s  0.00 %  0.00 % [14.xvda-2]

What's happening here is that some qube (a Fedora 28 AppVM) is under memory 
pressure (about to run out of memory) and, due to how poorly the kernel 
handles that, it does a lot of uncached reads (for whatever reason - yet to be 
fully determined) well before the in-AppVM Linux kernel OOM-killer triggers to 
kill the process using the most RAM. This effectively freezes everything in 
that AppVM (none of the windows (like terminals) get updated, though it can be 
Paused/Resumed/Killed from Qube Manager). This goes on for many minutes; I've 
yet to see it trigger the OOM-killer (unless I use 12000MB RAM instead of 
4000MB, then I can Pause/Unpause several times and it triggers; but never with 
4000MB).

So I figure, if Xen (or what? probably not dom0) could, even temporarily, 
cache these reads that the AppVM is doing (rather than counting on that VM's 
OS to cache the reads, which it can no longer do when under memory pressure), 
then that out-of-memory point (inside that AppVM) would be reached sooner and 
hopefully no AppVM OS freeze would be seen.
Is it possible? How?

Side question: how can I send eg. sysrq+m to a qube? (seems not possible 
according to this 2016 post https://phabricator.whonix.org/T553#10438 ? )



[qubes-users] Re: How to ccache kernel compilations

2018-08-25 Thread Marcus Linsner
On Friday, August 24, 2018 at 1:14:09 PM UTC+2, Marcus Linsner wrote:
> For posterity, the modifications (applied on top of 'qubes-linux-kernel' 
> repo's tag 'v4.14.57-2') that I used to achieve the above, are here:
> https://github.com/constantoverride/qubes-linux-kernel/commit/ac9a975512bdc67dc12c948355b14dfdcc229b1a
> (also attached just in case github goes away, somehow)

The way I tried to compile the kernel in this thread was wrong (because 
installing it in dom0 would fail, the compilation VM being Fedora 28 instead 
of 25, so dom0 would be missing some of the newer libs; while on a Fedora 25 
VM the compilation itself would fail). 

The right way to compile a VM (and dom0?) kernel is by using qubes-builder 
(which chroots into fc25 (aka Fedora 25, which is what dom0 is on) even though 
we're running inside a Fedora 28 VM): thanks to fepitre for telling me the 
steps here 
https://github.com/QubesOS/qubes-linux-kernel/pull/22#issuecomment-415453140

I'll keep track of my kernel compilation progress here: 
https://gist.github.com/constantoverride/825717e0136f804aa6ebf66293234b57
(like making ccache work for this version of compilation steps)
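
(For posterity, the rough shape of that route, as far as I understand it - an 
unverified sketch, not the exact steps from the linked comment:

$ git clone https://github.com/QubesOS/qubes-builder
$ cd qubes-builder
# set up builder.conf with the linux-kernel component enabled, then:
$ make get-sources COMPONENTS=linux-kernel
$ make linux-kernel)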



[qubes-users] Re: about [Dom0] Session and Startup - Application Autostart items

2018-08-24 Thread Marcus Linsner
On Friday, August 17, 2018 at 12:43:29 PM UTC+2, Marcus Linsner wrote:
> All the app links in xfce4's "[Dom0] Session and Startup" window under the 
> tab "Application Autostart" (see screenshot) cannot be Edit-ed which is 
> probably because they reside in /etc/xdg/autostart/ as *.desktop files; 
> another way to see what command they execute is looking at the tooltip shown 
> by hovering the mouse on them.
> 
> I needed to see what's the command for that blue "Q" in systray(aka 
> Notification Area) because it went away after some dialog popped up.
> 
> The answer is: it's one of the "Domains Tray" items(there are two) whose 
> command is:
> $ python3 -mqui.tray.domains &
> (added the "&" to let it go into background for when running it inside the 
> dom0 terminal; without the "&", Ctrl+Z then "$ bg" also works)
> 
> This post was supposed to be a question but before posting it I've figured it 
> out, but I'm still posting it just in case it might be useful to someone or 
> even future me.

When that blue 'Q' in the systray crashes, the dialog (which I missed the 
first time) tells me to restart it with the command: qui-domains
and it works! Though the command should have an & at the end, because it's 
blocking, so, in dom0:
$ qui-domains &



Re: [qubes-users] Error installing qubes-whonix-workstation-gnome (TemplateVM)

2018-08-24 Thread Marcus Linsner
On Thursday, March 19, 2015 at 7:08:58 AM UTC+1, WhonixQubes wrote:
> On 2015-03-19 5:37 am, Iestyn Best wrote:
> > Hi,
> > 
> > I have been following the work of Qubes-OS for a short while now and 
> > have
> > finally installed it on a new company laptop.
> > 
> > I was trying to install the Whonix templates so that I can play with 
> > them
> > but I am getting the following error:
> > 
> > qfile-agent: Fatal error: File copy: Disk quota exceeded; Last file:
> > qubes-template-whonix-workstation-gnome-2.1.8-201503092029.noarch.rpm
> > (error type: Disk quota exceeded)
> > 
> > I have been able to install the Gateway template as long as I did it by
> > itself.
> > 
> > Any help you may be able to provide would be greatly appreciated.
> > 
> > Regards,
> > Iestyn Best
> 
> 
> Hi Iestyn,
> 
> Try increasing the size of your UpdateVM (firewallvm) with step #2 for 
> the Whonix-Workstation install here:
> 
> https://www.whonix.org/wiki/Qubes/Binary_Install
> 
> 
> WhonixQubes

What is step #2 now? I can't find it.
What I found out is that the disk space for root aka / is what's being used 
up: when it reaches about 7.8G Used (up from like 4.4G Used), I get that `Disk 
quota exceeded.` message. However, if I restart the sys-firewall qube, the 
space is down to 4.4G Used again (for obvious reasons of how Qubes works) and 
the message won't be shown again (unless you want to install 3 TemplateVMs 
consecutively without restarting `sys-firewall`, I suppose).

So this 10G total space for / is apparently settable from `[Dom0] Qube 
Manager` in the `Qube settings` for `sys-firewall`, on the `Basic` tab under 
`Disk storage` (in Qubes R4.0 anyway): it's `System storage max. size:` but 
it's grayed out, which means it cannot be modified (even when the qube is shut 
down). So I had no idea how to change that, or whether it even makes sense to 
be able to change it... oh wait... that's what I thought: it can be changed 
only in the TemplateVM that `sys-firewall` uses. Cool! 20G should do, until 
the next reboot. A tooltip would be nice for the grayed-out items, saying they 
can only be changed in the TemplateVM ;-)
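
(For reference, the same resize can also be done from the dom0 command line; a 
sketch, where 'fedora-28' stands for whatever template sys-firewall is based 
on - newer tools call the action `resize` instead of `extend`:

$ qvm-volume extend fedora-28:root 20GiB)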



[qubes-users] Re: How to ccache kernel compilations

2018-08-24 Thread Marcus Linsner
On Friday, August 24, 2018 at 12:13:01 PM UTC+2, Marcus Linsner wrote:
> On Friday, August 24, 2018 at 11:24:27 AM UTC+2, Marcus Linsner wrote:
> > This is how a full(well, slightly modified) kernel compilation looks like 
> > now, with ccache working:
> > ie. `time make rpms`
> > real7m47.483s
> > user9m2.507s
> > sys 6m47.245s
> > 
> > cache directory /home/user/.ccache
> > primary config  /home/user/.ccache/ccache.conf
> > secondary config  (readonly)/etc/ccache.conf
> > stats zero time Fri Aug 24 11:09:03 2018
> > cache hit (direct) 14047
> > cache hit (preprocessed)   1
> > cache miss 8
> > cache hit rate 99.94 %
> > called for link   47
> > called for preprocessing   21125
> > unsupported code directive 4
> > no input file   1092
> > cleanups performed 0
> > files in cache 42606
> > cache size 865.4 MB
> > max cache size  20.0 GB
> > 
> > The build phase actually takes only 2min (for 14k files):
> > real2m1.674s
> > user5m28.075s
> > sys 4m50.768s
> > 
> > 
> > cache directory /home/user/.ccache
> > primary config  /home/user/.ccache/ccache.conf
> > secondary config  (readonly)/etc/ccache.conf
> > stats zero time Fri Aug 24 11:17:37 2018
> > cache hit (direct) 14011
> > cache hit (preprocessed)   0
> > cache miss 5
> > cache hit rate 99.96 %
> > called for link   28
> > called for preprocessing   21069
> > unsupported code directive 4
> > no input file342
> > cleanups performed 0
> > files in cache 42616
> > cache size 865.6 MB
> > max cache size  20.0 GB

For posterity, the modifications (applied on top of 'qubes-linux-kernel' repo's 
tag 'v4.14.57-2') that I used to achieve the above, are here:
https://github.com/constantoverride/qubes-linux-kernel/commit/ac9a975512bdc67dc12c948355b14dfdcc229b1a
(also attached just in case github goes away, somehow)

commit ac9a975512bdc67dc12c948355b14dfdcc229b1a
Author: constantoverride 
Date:   Fri Aug 24 12:49:27 2018 +0200

made ccache work; personalized .config a lil

also applied missing patches to avoid compilation errors when gcc plugins are enabled

diff --git a/.gitignore b/.gitignore
index deea910..99f843f 100644
--- a/.gitignore
+++ b/.gitignore
@@ -2,3 +2,4 @@ linux-*.tar.bz2
 linux-*.tar.xz
 linux-*.sign
 kernel-*/
+u2mfn/
diff --git a/Makefile b/Makefile
index 25f00e0..6853ea9 100644
--- a/Makefile
+++ b/Makefile
@@ -93,9 +93,17 @@ rpms-dom0: get-sources $(SPECFILE)
 rpms-nobuild:
 	$(RPM_WITH_DIRS) --nobuild -bb $(SPECFILE)
 
-rpms-just-build: 
+rpms-just-cleanbuild:
+	make clean -C kernel-4.14.57/linux-obj
+
+rpms-just-build-clean: rpms-just-cleanbuild
+
+rpms-just-build:
 	$(RPM_WITH_DIRS) --short-circuit -bc $(SPECFILE)
 
+rpms-just-install:
+	$(RPM_WITH_DIRS) --short-circuit -bi $(SPECFILE)
+
 rpms-install: 
 	$(RPM_WITH_DIRS) -bi $(SPECFILE)
 
@@ -110,8 +118,8 @@ verrel:
 
 # mop up, printing out exactly what was mopped.
 
-.PHONY : clean
-clean ::
+.PHONY : rpmclean
+rpmclean ::
 	@echo "Running the %clean script of the rpmbuild..."
 	$(RPM_WITH_DIRS) --clean --nodeps $(SPECFILE)
 
diff --git a/config-qubes-me b/config-qubes-me
new file mode 100644
index 000..6e46fae
--- /dev/null
+++ b/config-qubes-me
@@ -0,0 +1,43 @@
+## comments need double # like ## !!!
+## single # are not comments!
+
+## remove AMD stuff:
+CONFIG_PROCESSOR_SELECT=y
+CONFIG_CPU_SUP_INTEL=y
+# CONFIG_CPU_SUP_AMD is not set
+# CONFIG_CPU_SUP_CENTAUR is not set
+## CONFIG_GART_IOMMU is not set
+## CONFIG_X86_MCE_AMD is not set
+## CONFIG_PERF_EVENTS_AMD_POWER is not set
+# CONFIG_MICROCODE_AMD is not set
+## CONFIG_AMD_MEM_ENCRYPT is not set
+## CONFIG_AMD_MEM_E

[qubes-users] Re: How to ccache kernel compilations

2018-08-24 Thread Marcus Linsner
On Friday, August 24, 2018 at 11:24:27 AM UTC+2, Marcus Linsner wrote:
> This is how a full(well, slightly modified) kernel compilation looks like 
> now, with ccache working:
> ie. `time make rpms`
> real  7m47.483s
> user  9m2.507s
> sys   6m47.245s
> 
> cache directory /home/user/.ccache
> primary config  /home/user/.ccache/ccache.conf
> secondary config  (readonly)/etc/ccache.conf
> stats zero time Fri Aug 24 11:09:03 2018
> cache hit (direct) 14047
> cache hit (preprocessed)   1
> cache miss 8
> cache hit rate 99.94 %
> called for link   47
> called for preprocessing   21125
> unsupported code directive 4
> no input file   1092
> cleanups performed 0
> files in cache 42606
> cache size 865.4 MB
> max cache size  20.0 GB
> 
> The build phase actually takes only 2min (for 14k files):
> real  2m1.674s
> user  5m28.075s
> sys   4m50.768s
> 
> 
> cache directory /home/user/.ccache
> primary config  /home/user/.ccache/ccache.conf
> secondary config  (readonly)/etc/ccache.conf
> stats zero time Fri Aug 24 11:17:37 2018
> cache hit (direct) 14011
> cache hit (preprocessed)   0
> cache miss 5
> cache hit rate 99.96 %
> called for link   28
> called for preprocessing   21069
> unsupported code directive 4
> no input file342
> cleanups performed 0
> files in cache 42616
> cache size 865.6 MB
> max cache size  20.0 GB

And for comparison, a full %build phase when CONFIG_GCC_PLUGINS is 
untouched (aka set):
real17m19.746s
user125m44.920s
sys 17m9.877s

cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Fri Aug 24 11:27:18 2018
cache hit (direct)28
cache hit (preprocessed) 133
cache miss 13857
cache hit rate  1.15 %
called for link   30
called for preprocessing   21075
unsupported code directive 4
no input file348
cleanups performed 0
files in cache 84685
cache size   1.7 GB
max cache size  20.0 GB

So you see, 15 more minutes than with ccache. Ok, maybe that was the first 
compilation with CONFIG_GCC_PLUGINS set (ie. cold cache?), so redoing it (make 
prep; ccache -z; time make rpms-just-build) means it should make use of the 
now-primed ccache (ie. hot cache?):
real18m34.318s
user122m23.001s
sys 17m7.478s

cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Fri Aug 24 11:46:30 2018
cache hit (direct)   160
cache hit (preprocessed)   2
cache miss 13856
cache hit rate  1.16 %
called for link   30
called for preprocessing   21075
unsupported code directive 4
no input file348
cleanups performed 0
files in cache126746
cache size   2.6 GB
max cache size  20.0 GB

It probably took one minute longer than before because I was using the other 
VMs for browsing (also started a few).
But you get the point: a 1.2% ccache hit rate. Appalling! :D

On Friday, August 24, 2018 at 11:51:45 AM UTC+2, awokd wrote:
> Any idea what those GCC plugins are for? Seems like it's usually a hassle
> to track them down on distro version updates too.

According to the 'config-qubes' file [1], they help "Enable some more 
hardening options".

According to 
'/home/user/qubes-linux-kernel/kernel-4.14.57/linux-4.14.57/arch/Kconfig' [2]:

menuconfig GCC_PLUGINS
bool "GCC plugins"
depends on HAVE_GCC_PLUGINS
depends on !COMPILE_TEST
help
  GCC plugins are loadable modules that provide extra features to the
  compiler. They are useful for runtime instrumentation and static 
analysis.

  See Documentation/gcc-plugins.txt for details.

(see url [3] at the end, for this gcc-plugins.txt)


[qubes-users] Re: How to ccache kernel compilations

2018-08-24 Thread Marcus Linsner
This is how a full(well, slightly modified) kernel compilation looks like now, 
with ccache working:
ie. `time make rpms`
real7m47.483s
user9m2.507s
sys 6m47.245s

cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Fri Aug 24 11:09:03 2018
cache hit (direct) 14047
cache hit (preprocessed)   1
cache miss 8
cache hit rate 99.94 %
called for link   47
called for preprocessing   21125
unsupported code directive 4
no input file   1092
cleanups performed 0
files in cache 42606
cache size 865.4 MB
max cache size  20.0 GB

The build phase actually takes only 2min (for 14k files):
real2m1.674s
user5m28.075s
sys 4m50.768s


cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Fri Aug 24 11:17:37 2018
cache hit (direct) 14011
cache hit (preprocessed)   0
cache miss 5
cache hit rate 99.96 %
called for link   28
called for preprocessing   21069
unsupported code directive 4
no input file342
cleanups performed 0
files in cache 42616
cache size 865.6 MB
max cache size  20.0 GB



[qubes-users] Re: How to ccache kernel compilations

2018-08-23 Thread Marcus Linsner
On Thursday, August 23, 2018 at 5:21:48 PM UTC+2, Marcus Linsner wrote:
> On Thursday, August 23, 2018 at 4:27:13 PM UTC+2, Marcus Linsner wrote:
> > I'm trying to use ccache to compile kernel(s) but after 1k files of cache 
> > miss, I see only 8% hit rate, even if I keep the compilation dir 
> > ('linux-obj'), do a clean (kernel Makefile's clean, not rpm's clean) and 
> > re-issue the just-build again after a ccache -z (to clear ccache stats).
> ok I'm on to something: it's the .config !
> If I'm using default .config (aka in the source folder `make mrproper; make 
> menuconfig`, Save then Exit) then copy that .config into ../linux-obj/ and 
> then execute this twice:
> $ time make clean;ccache -z; time make -j18
> I get ccache hit direct over 90% ! which is how it should be.
> 
> I'll post again if I find out exactly which options in the .config are the 
> ccache-busting culprits

Well, it's CONFIG_GCC_PLUGINS: it has to be unset for ccache to work (else you 
get well under an 8% hit rate, instead of almost 100%), and it's being set in 
the file `config-qubes` like so:
CONFIG_GCC_PLUGINS=y
CONFIG_GCC_PLUGIN_LATENT_ENTROPY=y
CONFIG_GCC_PLUGIN_STRUCTLEAK=y

It was also the reason these two patches were needed here, 
https://groups.google.com/forum/#!topic/qubes-devel/Q3cdQKQS4Tk
to avoid compilation failures when compiling kernel 4.14.57.

Great, now ccache works even with `make rpms`!!
If anyone knows another way, or why I should keep those gcc plugins, lemme 
know? :D

cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Thu Aug 23 17:57:29 2018
cache hit (direct)  1029
cache hit (preprocessed)   8
cache miss32
cache hit rate 97.01 %
called for link   37
called for preprocessing1900
cache file missing 1
unsupported code directive 3
no input file704
cleanups performed 0
files in cache 73615
cache size   1.8 GB
max cache size  20.0 GB
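
For anyone wanting to reproduce this, the kernel tree's own helper can flip 
those options off (a sketch, run from the kernel source dir against the build 
dir's .config):

$ ./scripts/config --file ../linux-obj/.config \
    -d GCC_PLUGINS -d GCC_PLUGIN_LATENT_ENTROPY -d GCC_PLUGIN_STRUCTLEAK
$ make O=../linux-obj olddefconfig   # let kconfig re-settle dependent options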




[qubes-users] Re: How to ccache kernel compilations

2018-08-23 Thread Marcus Linsner
On Thursday, August 23, 2018 at 4:27:13 PM UTC+2, Marcus Linsner wrote:
> I'm trying to use ccache to compile kernel(s) but after 1k files of cache 
> miss, I see only 8% hit rate, even if I keep the compilation dir 
> ('linux-obj'), do a clean (kernel Makefile's clean, not rpm's clean) and 
> re-issue the just-build again after a ccache -z (to clear ccache stats).
ok I'm on to something: it's the .config !
If I use the default .config (aka in the source folder `make mrproper; make 
menuconfig`, Save then Exit), then copy that .config into ../linux-obj/ and 
then execute this twice:
$ time make clean;ccache -z; time make -j18
I get a ccache direct hit rate over 90% ! which is how it should be.

I'll post again if I find out exactly which options in the .config are the 
ccache-busting culprits



[qubes-users] Re: How to ccache kernel compilations

2018-08-23 Thread Marcus Linsner
On Thursday, August 23, 2018 at 4:36:51 PM UTC+2, Marcus Linsner wrote:
> Quick question: 
> If kernel-4.14.57/linux-obj/Makefile  is being regenerated on every build 
> (even if `linux-obj` dir is being kept between successing builds) does that 
> automatically cause `make` to rebuild everything and somehow invalidate 
> ccache?

to answer my own question: no effect even if I stop it from being regenerated 
(eg. touched)



[qubes-users] How to ccache kernel compilations

2018-08-23 Thread Marcus Linsner
I'm trying to use ccache to compile kernel(s) but after 1k files of cache miss, 
I see only 8% hit rate, even if I keep the compilation dir ('linux-obj'), do a 
clean (kernel Makefile's clean, not rpm's clean) and re-issue the just-build 
again after a ccache -z (to clear ccache stats).

So, just installing ccache via `sudo dnf install ccache` and running a new 
terminal, one should have ccache in PATH, like:
$ which gcc
/usr/lib64/ccache/gcc
$ which cc
/usr/lib64/ccache/cc
etc.
This causes the kernel compilation to always use ccache (since it increases 
the cache miss counter); however, it's almost always all cache misses (as I 
said, even if I keep the obj dir and just run `make clean` inside it)!
What am I missing here?

I'm using https://github.com/QubesOS/qubes-linux-kernel

cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Thu Aug 23 16:10:18 2018
cache hit (direct)88
cache hit (preprocessed)   2
cache miss  1026
cache hit rate  8.06 %
called for link   22
called for preprocessing2052
unsupported code directive 2
no input file177
cleanups performed 0
files in cache  6730
cache size 148.4 MB
max cache size  20.0 GB


next compilation, after a `ccache -z` and kernel Makefile's `make clean`:
cache directory /home/user/.ccache
primary config  /home/user/.ccache/ccache.conf
secondary config  (readonly)/etc/ccache.conf
stats zero time Thu Aug 23 16:21:17 2018
cache hit (direct)   105
cache hit (preprocessed)   2
cache miss  1015
cache hit rate  9.54 %
called for link   22
called for preprocessing2047
unsupported code directive 3
no input file295
cleanups performed 0
files in cache  9859
cache size 217.9 MB
max cache size  20.0 GB



[qubes-users] "Sensors plugin" in dom0 generates 2 "audit:" dmesg messages on every temperature refresh

2018-08-22 Thread Marcus Linsner
"Sensors plugin" is an xfce4-panel plugin which shows the CPU(and SSD) 
temperatures in the panel. (eg. RMB on panel, Panel->Add New Items...->Search: 
->Sensor plugin)

Its default refresh is 60 seconds. I've set it to 5. But I want it on 1 second, 
however this means it would generate 2 dmesg audit messages every second AND 
they are flushed to the disk(judging by the case HDD led flashing).

[   93.223814] audit: type=1100 audit(1534971421.712:183): pid=3748 uid=1000 
auid=1000 ses=2 msg='op=PAM:authentication grantors=pam_localuser acct="root" 
exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? res=success'
[   93.223828] audit: type=1101 audit(1534971421.712:184): pid=3748 uid=1000 
auid=1000 ses=2 msg='op=PAM:accounting grantors=pam_permit acct="root" 
exe="/usr/sbin/userhelper" hostname=? addr=? terminal=? res=success'


Is there some way to turn these off, if not all the audit messages?
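
One possibility (an untested sketch; the record types come straight from the 
messages above, where type=1100 is USER_AUTH and type=1101 is USER_ACCT) is 
auditctl's exclude filter:

$ sudo auditctl -a always,exclude -F msgtype=USER_AUTH
$ sudo auditctl -a always,exclude -F msgtype=USER_ACCT
# (to persist across reboots, the same rules would go into
# /etc/audit/rules.d/audit.rules)

The heavy-handed alternative would be booting dom0 with audit=0 on the kernel 
command line, disabling the audit subsystem entirely.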



[qubes-users] Re: Many [kdmflush] on dom0

2018-08-22 Thread Marcus Linsner
On Wednesday, August 22, 2018 at 12:13:29 AM UTC+2, donoban wrote:
> # ps aux | grep kdmflush | wc
> # 157
I got 71 and my uptime is 3min. (Qubes OS R4.0)



[qubes-users] Re: Both dVM gnome-terminals are not launching

2018-08-20 Thread Marcus Linsner
On Tuesday, August 21, 2018 at 2:28:53 AM UTC+2, John S.Recdep wrote:
> On 08/16/2018 03:09 AM, Marcus Linsner wrote:
> > On Friday, June 1, 2018 at 11:31:14 PM UTC+2, qube...-...@public.gmane.org 
> > wrote:
> >> The Qubes docs at:
> >>
> >> https://www.qubes-os.org/doc/dispvm-customization/
> >>
> >> note the following for disposable vms:
> >>
> >> __
> >>
> >> Note that currently only applications whose main process keeps running 
> >> until you close the application (i.e. do not start a background process 
> >> instead) will work. One of known examples of incompatible applications 
> >> is GNOME Terminal (shown on the list as “Terminal”). Choose different 
> >> terminal emulator (like XTerm) instead.
> > 
> > Also nautilus (shown on the list as "Files") even though its main process 
> > (at least when run from another terminal) doesn't return (like 
> > gnome-terminal does) until its window is closed (actually 11 seconds after 
> > its window is closed: try "time nautilus; echo returned" and alt+f4 the 
> > window as soon as it appears - shows like 13 seconds then "returned"). Can 
> > anyone explain?
> > 
> 
> Can you rewrite this, if it is not solved?
> 
> What are you trying to do? Open a dispVM and you want it to work
> when it isn't, and/or it's too slow? ... your question isn't clear

In a DispVM, when you run xterm (which does work/open) and then type "time 
nautilus; echo returned", it works: it opens the 'Files' window, and in xterm 
you can see that 'time nautilus' has not returned. It only returns 11 seconds 
AFTER you close the nautilus window - which almost makes sense, but why the 
delay? That's one question! This means nautilus doesn't seem to run as a 
background process (like 'gnome-terminal' does), which means that when you 
choose the 'Files' application directly in the fedora 28 disposable VM menu, 
it should open the window. And yet it doesn't: it starts a DispVM which then 
halts, without opening any windows - just like when you try 'gnome-terminal' 
(shown as 'Terminal' in the app list). OK, it's almost the same; at least with 
'gnome-terminal' you do get to see its window appear for about 1 second before 
it auto-closes (and all this I understand, it's explained in the quoted 
section below).

So the following doesn't seem to apply to nautilus; is something else going 
on?
> Note that currently only applications whose main process keeps running
> until you close the application (i.e. do not start a background process
> instead) will work. One of known examples of incompatible applications
> is GNOME Terminal (shown on the list as “Terminal”). Choose different
> terminal emulator (like XTerm) instead. 



[qubes-users] Is the hostname "_gateway" (on systemd linuxes) a security issue?

2018-08-18 Thread Marcus Linsner
Since systemd exposes the hostname "_gateway" (i.e. `ping _gateway` works) on 
systemd OSes, does this pose any security risk if a website decides to use 
that hostname to access your router?

On appVMs this apparently can't be an issue, because _gateway points to a 
10.x.x.x private address; in fact, it points to sys-firewall's IP (assuming 
sys-firewall is the net VM set for that appVM in its settings).

On a vanilla Linux though, it seems that websites could reach the router by 
using the _gateway hostname. Does anyone know if this can be done? It'd be 
kinda lame if so... and I can only imagine the attacks that could be performed.
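
For anyone wanting to check: _gateway is synthesized locally by systemd's 
nss-myhostname glibc module, not served over DNS, so you can see what it maps 
to with standard glibc tooling:

$ getent hosts _gateway

A remote site can't resolve it, but a page loaded in your browser could still 
reference http://_gateway/ and have *your* resolver turn it into the router's 
IP, which I guess is the scenario I'm worried about.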



Re: [qubes-users] How to open 'green' URL's in a new red window?

2018-08-17 Thread Marcus Linsner
On Friday, August 17, 2018 at 3:09:33 PM UTC+2, awokd wrote:
> On Fri, August 17, 2018 4:46 am, Marcus Linsner wrote:
> > On Monday, February 16, 2015 at 11:00:29 AM UTC+1, Laszlo Zrubecz wrote:
> >
> >> On 02/16/15 10:53, kerste...@gmail.com wrote:
> >>
> >>> Hello,
> >>>
> >>>
> >>> I have the document D1 with the URL1 inside the green Domain.
> >>>
> >>>
> >>> If I click on this domain, than the green Domain with the firefox is
> >>> starting...
> >>>
> >>> Can I define, that all URL's, which get opened by clicking on the
> >>> URL, are opend in another domain with the appropiate web-security
> >>> level, e.g. red?
> >>
> >> You can define it in OS level (default applications in GUI)
> >> You can use qvm-open-in-vm or qvm-open-in-dvm commands to open new
> >> links...
> >>
> >>
> >> --
> >> Zrubi
> >>
> >
> > Has anyone added a new menu entry such as "Open Link in New qube VM" to
> > Firefox's context menu, maybe under "Open Link in New Tab" for example?
> > If not, I'll post a link to it when I've finished it (would require
> > recompiling firefox btw - and I'm still learning how to do it under
> > Fedora 28) because I really need something like this in order to open
> > links from my google-search-VM into other VM(s).
> 
> I've seen a couple versions of this. One was discussed on qubes-devel a
> few months ago, but forget where I saw the second...

Was this the second version: 
https://groups.google.com/d/msg/qubes-devel/fsIAQO1xFkU/u1C61kxCBgAJ ?
That is: https://github.com/raffaeleflorio/qubes-url-redirector
I ask because it says: "Furthermore, through context menu entries, you can open 
a specific link in a custom way. Currently you can open links in: DVM, a 
default-VM, a specific VM and in this VM."

I haven't checked it yet, but sounds great!
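
In the meantime, the built-in CLI tools already cover the non-context-menu 
case; run from inside the source VM ("personal" here is just a placeholder 
qube name):

$ qvm-open-in-vm personal https://example.com   # open in a specific VM
$ qvm-open-in-dvm https://example.com           # open in a fresh DispVM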



[qubes-users] about [Dom0] Session and Startup - Application Autostart items

2018-08-17 Thread Marcus Linsner
All the app links in xfce4's "[Dom0] Session and Startup" window, under the 
"Application Autostart" tab (see screenshot), cannot be edited, which is 
probably because they reside in /etc/xdg/autostart/ as *.desktop files; another 
way to see what command they execute is to look at the tooltip shown when 
hovering the mouse over them.
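
Yet another way, which avoids hovering over each item, is to dump the Exec= 
lines straight from the .desktop files (plain grep, nothing Qubes-specific):

$ grep -H '^Exec=' /etc/xdg/autostart/*.desktop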

I needed to see the command behind that blue "Q" in the systray (aka 
Notification Area) because it went away after some dialog popped up.

The answer: it's one of the "Domains Tray" items (there are two), whose 
command is:
$ python3 -mqui.tray.domains &
(I added the "&" to send it into the background when running it inside a dom0 
terminal; without the "&", Ctrl+Z and then "$ bg" also works)

This post was supposed to be a question, but I figured it out just before 
posting. I'm posting it anyway in case it's useful to someone, or even to 
future me.



Re: [qubes-users] How to open "green" URL's in a new red window?

2018-08-16 Thread Marcus Linsner
On Monday, February 16, 2015 at 11:00:29 AM UTC+1, Laszlo Zrubecz wrote:
> On 02/16/15 10:53, kerste...@gmail.com wrote:
> > Hello,
> > 
> > I have the document D1 with the URL1 inside the green Domain.
> > 
> > If I click on this domain, than the green Domain with the firefox is 
> > starting...
> > 
> > Can I define, that all URL's, which get opened by clicking on the URL, are 
> > opend in another domain with the appropiate web-security level, e.g. red?
> 
> You can define it in OS level (default applications in GUI)
> You can use qvm-open-in-vm or qvm-open-in-dvm commands to open new links...
> 
> 
> -- 
> Zrubi

Has anyone added a new menu entry such as "Open Link in New qube VM" to 
Firefox's context menu, maybe under "Open Link in New Tab" for example? If not, 
I'll post a link to it when I've finished it (would require recompiling firefox 
btw - and I'm still learning how to do it under Fedora 28) because I really 
need something like this in order to open links from my google-search-VM into 
other VM(s).



[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-16 Thread Marcus Linsner
On Thursday, August 16, 2018 at 10:06:54 PM UTC+2, Brendan Hoar wrote:
> On Thursday, August 16, 2018 at 3:21:27 PM UTC-4, Marcus Linsner wrote:
> > The good news is that I've realized that the OOM triggering was legit: I 
> > had firefox set to use 12 cores at once and 14GiB of RAM was clearly not 
> > enough! (8 and no ccache was good though - did compile it twice like so) 
> > 
> > The bad news is that I still don't know why the disk-read thrashing was 
> > happening for me, but I will default to blame the OOM (even though no swap 
> > was active, ie. I swapoff-ed the swap partition earlier) due to previous 
> > experience with OOM triggering on bare-metal hardware: I seem to remember 
> > SSD disk activity led being full-on during an impending OOM and everything 
> > freezing!
> 
> Maybe this applies:
> 
> https://askubuntu.com/questions/432809/why-is-kswapd0-running-on-a-computer-with-no-swap
> 
> [[if kswapd0 is taking any CPU and you do not have swap, the system is nearly 
> out of RAM and is trying to deal with the situation by (in practise) swapping 
> pages from executables. The correct fix is to reduce workload, add swap or 
> (preferably) install more RAM. Adding swap will improve performance because 
> kernel will have more options about what to swap to disk. Without swap the 
> kernel is practically forced to swap application code.]]
> 
> This could be a reason you only see reads hammering the drive, maybe?
> 
> Also worth remembering: every read is decrypting block(s) which takes some 
> CPU (even on systems with AES-NI support).
> 
> Brendan

Thank you Brendan! The following comment (from the webpage you linked) 
explained the constant disk-reading best for me:

"For example, consider a case where you have zero swap and system is nearly 
running out of RAM. The kernel will take memory from e.g. Firefox (it can do 
this because Firefox is running executable code that has been loaded from disk 
- the code can be loaded from disk again if needed). If Firefox then needs to 
access that RAM again N seconds later, the CPU generates "hard fault" which 
forces Linux to free some RAM (e.g. take some RAM from another process), load 
the missing data from disk and then allow Firefox to continue as usual. This is 
pretty similar to normal swapping and kswapd0 does it.  " - Mikko Rantalainen 
Feb 15 at 13:08

$ sysctl vm.swappiness
vm.swappiness = 60
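
So the standard mitigation would be to give the kernel something other than 
executable pages to reclaim, i.e. turn the swap back on in the qube (in R4.0 
the VM's swap partition is normally /dev/xvdc1; untested here since I 
swapoff-ed it myself):

$ sudo swapon /dev/xvdc1
$ swapon --show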

In retrospect, I apologize for hijacking this thread, because it now appears 
to me that my issue is totally different from the OP's (even though the 
subject still applies):

On Friday, August 10, 2018 at 9:02:31 PM UTC+2, Kelly Dean wrote:
> Has anybody else used both Qubes 3.2 and 4.0 on a system with a HD, not SSD? 
> Have you noticed the disk thrashing to be far worse under 4.0? I suspect it 
> might have something to do with the new use of LVM combining snapshots with 
> thin provisioning.
> 
> The problem seems to be triggered by individual qubes doing ordinary bursts 
> of disk access, such as loading a program or accessing swap, which would 
> normally take just a few seconds on Qubes 3.2, but dom0 then massively 
> multiplies that I/O on Qubes 4.0, leading to disk thrashing that drags on for 
> minutes at a time, and in some cases, more than an hour.
> 
> iotop in dom0 says the thrashing procs are e.g. [21.xvda-0] and [21.xvda-1], 
> reading the disk at rates ranging from 10 to 50 MBps (max throughput of the 
> disk is about 100). At this rate, for how prolonged the thrashing is, it 
> could have read and re-read the entire virtual disk multiple times over, so 
> there's something extremely inefficient going on.
> 
> Is there any solution other than installing a SSD? I'd prefer not to have to 
> add hardware to solve a software performance regression.



[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-16 Thread Marcus Linsner
On Thursday, August 16, 2018 at 8:03:52 PM UTC+2, Marcus Linsner wrote:
> On Thursday, August 16, 2018 at 7:50:14 PM UTC+2, Marcus Linsner wrote:
> > On Thursday, August 16, 2018 at 7:35:26 PM UTC+2, Marcus Linsner wrote:
> > > $ cat /proc/meminfo
> > > MemTotal:7454500 kB
> > > MemFree: 5635088 kB
> > > MemAvailable:6574676 kB
> > > Buffers:   53832 kB
> > > Cached:  1094368 kB
> > > SwapCached:0 kB
> > > Active:   724832 kB
> > > Inactive: 747696 kB
> > > Active(anon): 233816 kB
> > > Inactive(anon):95768 kB
> > > Active(file): 491016 kB
> > > Inactive(file):   651928 kB
> > > Unevictable:   73568 kB
> > > Mlocked:   73568 kB
> > > SwapTotal: 0 kB
> > > SwapFree:  0 kB
> > > Dirty:   292 kB
> > > Writeback: 0 kB
> > > AnonPages:398016 kB
> > > Mapped:54320 kB
> > > Shmem:  5256 kB
> > > Slab: 134680 kB
> > > SReclaimable:  74124 kB
> > > SUnreclaim:60556 kB
> > > KernelStack:4800 kB
> > > PageTables:10524 kB
> > > NFS_Unstable:  0 kB
> > > Bounce:0 kB
> > > WritebackTmp:  0 kB
> > > CommitLimit: 3727248 kB
> > > Committed_AS:1332236 kB
> > > VmallocTotal:   34359738367 kB
> > > VmallocUsed:   0 kB
> > > VmallocChunk:  0 kB
> > > HardwareCorrupted: 0 kB
> > > AnonHugePages: 0 kB
> > > ShmemHugePages:0 kB
> > > ShmemPmdMapped:0 kB
> > > CmaTotal:  0 kB
> > > CmaFree:   0 kB
> > > HugePages_Total:   0
> > > HugePages_Free:0
> > > HugePages_Rsvd:0
> > > HugePages_Surp:0
> > > Hugepagesize:   2048 kB
> > > DirectMap4k:  327644 kB
> > > DirectMap2M:14008320 kB
> > > DirectMap1G:   0 kB
> > 
> > I resumed the firefox compilation and noticed that the memory jumped back 
> > to 14GB again - I was sure it was more than that 7.4GB before:
> > 
> > $ cat /proc/meminfo 
> > MemTotal:   14003120 kB
> > MemFree: 4602448 kB
> > MemAvailable:6622252 kB
> > Buffers:  186220 kB
> > Cached:  1986192 kB
> > SwapCached:0 kB
> > Active:  7482024 kB
> > Inactive:1448656 kB
> > Active(anon):6667828 kB
> > Inactive(anon):95780 kB
> > Active(file): 814196 kB
> > Inactive(file):  1352876 kB
> > Unevictable:   73568 kB
> > Mlocked:   73568 kB
> > SwapTotal: 0 kB
> > SwapFree:  0 kB
> > Dirty:306392 kB
> > Writeback:  4684 kB
> > AnonPages:   6811888 kB
> > Mapped:   199164 kB
> > Shmem:  5340 kB
> > Slab: 239524 kB
> > SReclaimable: 177620 kB
> > SUnreclaim:61904 kB
> > KernelStack:5968 kB
> > PageTables:28612 kB
> > NFS_Unstable:  0 kB
> > Bounce:0 kB
> > WritebackTmp:  0 kB
> > CommitLimit: 7001560 kB
> > Committed_AS:8571548 kB
> > VmallocTotal:   34359738367 kB
> > VmallocUsed:   0 kB
> > VmallocChunk:  0 kB
> > HardwareCorrupted: 0 kB
> > AnonHugePages: 0 kB
> > ShmemHugePages:0 kB
> > ShmemPmdMapped:0 kB
> > CmaTotal:  0 kB
> > CmaFree:   0 kB
> > HugePages_Total:   0
> > HugePages_Free:0
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> > DirectMap4k:  327644 kB
> > DirectMap2M:14008320 kB
> > DirectMap1G:   0 kB
> > 
> > 
> > Oh man, I'm hitting that disk thrashing again after just a few minutes: 
> > 202MiB/sec reading, 0.0 writing.
> > 
> > Paused qube, reading stopped.
> > Resumed qube sooner than before and it's still thrashing...
> > 
> > It's a Fedora 28 template-based VM.
> > 
> > I shut down another VM and I thought dom0 crashed because it froze for like 
> > 10 sec before the notification message told me that that VM stopped.
> 
> Ok, I caught kswapd0 at 14% in a 'top' terminal on the offending qube, before 
> the disk thrashing began (which froze a

[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-16 Thread Marcus Linsner
On Thursday, August 16, 2018 at 7:50:14 PM UTC+2, Marcus Linsner wrote:
> On Thursday, August 16, 2018 at 7:35:26 PM UTC+2, Marcus Linsner wrote:
> > $ cat /proc/meminfo
> > MemTotal:7454500 kB
> > MemFree: 5635088 kB
> > MemAvailable:6574676 kB
> > Buffers:   53832 kB
> > Cached:  1094368 kB
> > SwapCached:0 kB
> > Active:   724832 kB
> > Inactive: 747696 kB
> > Active(anon): 233816 kB
> > Inactive(anon):95768 kB
> > Active(file): 491016 kB
> > Inactive(file):   651928 kB
> > Unevictable:   73568 kB
> > Mlocked:   73568 kB
> > SwapTotal: 0 kB
> > SwapFree:  0 kB
> > Dirty:   292 kB
> > Writeback: 0 kB
> > AnonPages:398016 kB
> > Mapped:54320 kB
> > Shmem:  5256 kB
> > Slab: 134680 kB
> > SReclaimable:  74124 kB
> > SUnreclaim:60556 kB
> > KernelStack:4800 kB
> > PageTables:10524 kB
> > NFS_Unstable:  0 kB
> > Bounce:0 kB
> > WritebackTmp:  0 kB
> > CommitLimit: 3727248 kB
> > Committed_AS:1332236 kB
> > VmallocTotal:   34359738367 kB
> > VmallocUsed:   0 kB
> > VmallocChunk:  0 kB
> > HardwareCorrupted: 0 kB
> > AnonHugePages: 0 kB
> > ShmemHugePages:0 kB
> > ShmemPmdMapped:0 kB
> > CmaTotal:  0 kB
> > CmaFree:   0 kB
> > HugePages_Total:   0
> > HugePages_Free:0
> > HugePages_Rsvd:0
> > HugePages_Surp:0
> > Hugepagesize:   2048 kB
> > DirectMap4k:  327644 kB
> > DirectMap2M:14008320 kB
> > DirectMap1G:   0 kB
> 
> I resumed the firefox compilation and noticed that the memory jumped back to 
> 14GB again - I was sure it was more than that 7.4GB before:
> 
> $ cat /proc/meminfo 
> MemTotal:   14003120 kB
> MemFree: 4602448 kB
> MemAvailable:6622252 kB
> Buffers:  186220 kB
> Cached:  1986192 kB
> SwapCached:0 kB
> Active:  7482024 kB
> Inactive:1448656 kB
> Active(anon):6667828 kB
> Inactive(anon):95780 kB
> Active(file): 814196 kB
> Inactive(file):  1352876 kB
> Unevictable:   73568 kB
> Mlocked:   73568 kB
> SwapTotal: 0 kB
> SwapFree:  0 kB
> Dirty:306392 kB
> Writeback:  4684 kB
> AnonPages:   6811888 kB
> Mapped:   199164 kB
> Shmem:  5340 kB
> Slab: 239524 kB
> SReclaimable: 177620 kB
> SUnreclaim:61904 kB
> KernelStack:5968 kB
> PageTables:28612 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit: 7001560 kB
> Committed_AS:8571548 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:   0 kB
> VmallocChunk:  0 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 0 kB
> ShmemHugePages:0 kB
> ShmemPmdMapped:0 kB
> CmaTotal:  0 kB
> CmaFree:   0 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> DirectMap4k:  327644 kB
> DirectMap2M:14008320 kB
> DirectMap1G:   0 kB
> 
> 
> Oh man, I'm hitting that disk thrashing again after just a few minutes: 
> 202MiB/sec reading, 0.0 writing.
> 
> Paused qube, reading stopped.
> Resumed qube sooner than before and it's still thrashing...
> 
> It's a Fedora 28 template-based VM.
> 
> I shut down another VM and I thought dom0 crashed because it froze for like 
> 10 sec before the notification message told me that that VM stopped.

Ok, I caught kswapd0 at 14% in a 'top' terminal on the offending qube before 
the disk thrashing began (which froze all terminals too), and it was then the 
only process at 100% after the disk thrashing stopped! Here's the continuation 
of the log; btw, the thrashing only stopped after the OOM killer killed the 
rustc process (which, my guess, is what was triggering kswapd0 to use 100% 
CPU):

[ 6871.435899] systemd-coredum: 4 output lines suppressed due to ratelimiting
[ 6871.485869] audit: type=1130 audit(1534438842.909:179): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=systemd-logind comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
[ 6871.486357] audit: type=1130 audit(1534438842.910:180): pid=1 uid=0 
auid=4294967295 ses=429496729

[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-16 Thread Marcus Linsner
On Thursday, August 16, 2018 at 7:35:26 PM UTC+2, Marcus Linsner wrote:
> $ cat /proc/meminfo
> MemTotal:7454500 kB
> MemFree: 5635088 kB
> MemAvailable:6574676 kB
> Buffers:   53832 kB
> Cached:  1094368 kB
> SwapCached:0 kB
> Active:   724832 kB
> Inactive: 747696 kB
> Active(anon): 233816 kB
> Inactive(anon):95768 kB
> Active(file): 491016 kB
> Inactive(file):   651928 kB
> Unevictable:   73568 kB
> Mlocked:   73568 kB
> SwapTotal: 0 kB
> SwapFree:  0 kB
> Dirty:   292 kB
> Writeback: 0 kB
> AnonPages:398016 kB
> Mapped:54320 kB
> Shmem:  5256 kB
> Slab: 134680 kB
> SReclaimable:  74124 kB
> SUnreclaim:60556 kB
> KernelStack:4800 kB
> PageTables:10524 kB
> NFS_Unstable:  0 kB
> Bounce:0 kB
> WritebackTmp:  0 kB
> CommitLimit: 3727248 kB
> Committed_AS:1332236 kB
> VmallocTotal:   34359738367 kB
> VmallocUsed:   0 kB
> VmallocChunk:  0 kB
> HardwareCorrupted: 0 kB
> AnonHugePages: 0 kB
> ShmemHugePages:0 kB
> ShmemPmdMapped:0 kB
> CmaTotal:  0 kB
> CmaFree:   0 kB
> HugePages_Total:   0
> HugePages_Free:0
> HugePages_Rsvd:0
> HugePages_Surp:0
> Hugepagesize:   2048 kB
> DirectMap4k:  327644 kB
> DirectMap2M:14008320 kB
> DirectMap1G:   0 kB

I resumed the firefox compilation and noticed that the memory jumped back to 
14GB again - I was sure it was more than the 7.4GB shown before:

$ cat /proc/meminfo 
MemTotal:   14003120 kB
MemFree: 4602448 kB
MemAvailable:6622252 kB
Buffers:  186220 kB
Cached:  1986192 kB
SwapCached:0 kB
Active:  7482024 kB
Inactive:1448656 kB
Active(anon):6667828 kB
Inactive(anon):95780 kB
Active(file): 814196 kB
Inactive(file):  1352876 kB
Unevictable:   73568 kB
Mlocked:   73568 kB
SwapTotal: 0 kB
SwapFree:  0 kB
Dirty:306392 kB
Writeback:  4684 kB
AnonPages:   6811888 kB
Mapped:   199164 kB
Shmem:  5340 kB
Slab: 239524 kB
SReclaimable: 177620 kB
SUnreclaim:61904 kB
KernelStack:5968 kB
PageTables:28612 kB
NFS_Unstable:  0 kB
Bounce:0 kB
WritebackTmp:  0 kB
CommitLimit: 7001560 kB
Committed_AS:8571548 kB
VmallocTotal:   34359738367 kB
VmallocUsed:   0 kB
VmallocChunk:  0 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages:0 kB
ShmemPmdMapped:0 kB
CmaTotal:  0 kB
CmaFree:   0 kB
HugePages_Total:   0
HugePages_Free:0
HugePages_Rsvd:0
HugePages_Surp:0
Hugepagesize:   2048 kB
DirectMap4k:  327644 kB
DirectMap2M:14008320 kB
DirectMap1G:   0 kB


Oh man, I'm hitting that disk thrashing again after just a few minutes: 
202MiB/sec reading, 0.0 writing.

Paused qube, reading stopped.
Resumed qube sooner than before and it's still thrashing...

It's a Fedora 28 template-based VM.

I shut down another VM and I thought dom0 crashed because it froze for like 10 
sec before the notification message told me that that VM stopped.



[qubes-users] Re: Incredible HD thrashing on 4.0

2018-08-16 Thread Marcus Linsner
On Friday, August 10, 2018 at 9:02:31 PM UTC+2, Kelly Dean wrote:
> Has anybody else used both Qubes 3.2 and 4.0 on a system with a HD, not SSD? 
> Have you noticed the disk thrashing to be far worse under 4.0? I suspect it 
> might have something to do with the new use of LVM combining snapshots with 
> thin provisioning.
> 
> The problem seems to be triggered by individual qubes doing ordinary bursts 
> of disk access, such as loading a program or accessing swap, which would 
> normally take just a few seconds on Qubes 3.2, but dom0 then massively 
> multiplies that I/O on Qubes 4.0, leading to disk thrashing that drags on for 
> minutes at a time, and in some cases, more than an hour.
> 
> iotop in dom0 says the thrashing procs are e.g. [21.xvda-0] and [21.xvda-1], 
> reading the disk at rates ranging from 10 to 50 MBps (max throughput of the 
> disk is about 100). At this rate, for how prolonged the thrashing is, it 
> could have read and re-read the entire virtual disk multiple times over, so 
> there's something extremely inefficient going on.
> 
> Is there any solution other than installing a SSD? I'd prefer not to have to 
> add hardware to solve a software performance regression.

Interestingly, I've just encountered this thrashing, but on an SSD (it's just 
reading 192MiB/sec constantly), Qubes R4.0 up to date, inside a qube while 
compiling firefox: typing in any of 3 of its terminal windows does not even 
echo anything, and the firefox compilation terminal is frozen; the swap (of 
1G) was turned off a while ago (via swapoff). I used Qube Manager to Pause the 
offending qube and the thrashing stopped. I don't see much in the logs.
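
(For the record, the same pause/resume can be done from a dom0 terminal; 
"buildvm" is just a placeholder qube name:

$ qvm-pause buildvm
$ qvm-unpause buildvm
)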

Ok, so I resumed the qube: the thrashing resumed for a few seconds, then 
stopped, and all terminals were alive again (I can type into them). The log 
spewed some new things (since the updatedb audit, which was the last entry 
while Paused). I'm including some long lines from before; note that the 
post-unpause log starts from "[ 6862.846945] INFO: rcu_sched self-detected 
stall on CPU", as follows:


[0.00] Linux version 4.14.57-1.pvops.qubes.x86_64 (user@build-fedora4) 
(gcc version 6.4.1 20170727 (Red Hat 6.4.1-1) (GCC)) #1 SMP Mon Jul 23 16:28:54 
UTC 2018
[0.00] Command line: root=/dev/mapper/dmroot ro nomodeset console=hvc0 
rd_NO_PLYMOUTH rd.plymouth.enable=0 plymouth.enable=0 nopat

...

[ 2769.581919] audit: type=1101 audit(1534434741.005:133): pid=10290 uid=1000 
auid=1000 ses=1 msg='op=PAM:accounting grantors=pam_unix acct="user" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[ 2769.582396] audit: type=1123 audit(1534434741.005:134): pid=10290 uid=1000 
auid=1000 ses=1 msg='cwd="/home/user" cmd=737761706F202F6465762F7876646331 
terminal=pts/3 res=success'
[ 2769.582525] audit: type=1110 audit(1534434741.006:135): pid=10290 uid=0 
auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[ 2769.583384] audit: type=1105 audit(1534434741.007:136): pid=10290 uid=0 
auid=1000 ses=1 msg='op=PAM:session_open 
grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix 
acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 
res=success'
[ 2776.388700] audit: type=1106 audit(1534434747.812:137): pid=10290 uid=0 
auid=1000 ses=1 msg='op=PAM:session_close 
grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix 
acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 
res=success'
[ 2776.388735] audit: type=1104 audit(1534434747.812:138): pid=10290 uid=0 
auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/3 res=success'
[ 4093.008056] audit: type=1116 audit(1534436064.432:139): pid=29167 uid=0 
auid=4294967295 ses=4294967295 msg='op=add-group id=982 
exe="/usr/sbin/groupadd" hostname=? addr=? terminal=? res=success'
[ 4093.030620] audit: type=1132 audit(1534436064.454:140): pid=29167 uid=0 
auid=4294967295 ses=4294967295 msg='op=add-shadow-group id=982 
exe="/usr/sbin/groupadd" hostname=? addr=? terminal=? res=success'
[ 4093.304708] audit: type=1130 audit(1534436064.728:141): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=run-rfbdacad57c5f4bc183d36a7c402c9ae7 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
[ 4094.576065] audit: type=1130 audit(1534436065.999:142): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=man-db-cache-update comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 4094.576138] audit: type=1131 audit(1534436065.999:143): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=man-db-cache-update comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[ 4094.577822] audit: type=1131 audit(1534436066.001:144): pid=1 uid=0 
auid=4294967295 ses=4294967295 

[qubes-users] Re: Both dVM gnome-terminals are not launching

2018-08-16 Thread Marcus Linsner
On Friday, June 1, 2018 at 11:31:14 PM UTC+2, qube...@go-bailey.com wrote:
> The Qubes docs at:
> 
> https://www.qubes-os.org/doc/dispvm-customization/
> 
> note the following for disposable vms:
> 
> __
> 
> Note that currently only applications whose main process keeps running 
> until you close the application (i.e. do not start a background process 
> instead) will work. One of known examples of incompatible applications 
> is GNOME Terminal (shown on the list as “Terminal”). Choose different 
> terminal emulator (like XTerm) instead.

Also nautilus (shown on the list as "Files") even though its main process (at 
least when run from another terminal) doesn't return (like gnome-terminal does) 
until its window is closed (actually 11 seconds after its window is closed: try 
"time nautilus; echo returned" and alt+f4 the window as soon as it appears - 
shows like 13 seconds then "returned"). Can anyone explain?



[qubes-users] Re: X470 and IOMMU Groups...

2018-08-16 Thread Marcus Linsner
On Thursday, August 16, 2018 at 1:47:15 PM UTC+2, Marcus Linsner wrote:
> > 
> > I've observed that Qubes installation rarely ever succeeds on X370 
> > motherboards so I believe the same case applies to X470 motherboards with a 
> > higher chance of failure since it is newer. The reason for this I believe 
> > is because these high-end gaming motherboards have a lot of 
> > functionalities/bugs that break/interfere with Qubes installation which is 
> > an awful letdown.
> 
> I've had no issues installing Qubes R4.0 several times (for fun) on an Asus 
> PRIME X370-A motherboard.
My bad: I just realized you were talking about X370, not Z370, and I typoed 
the board name above (it's a PRIME Z370-A, not X370-A).



[qubes-users] Re: X470 and IOMMU Groups...

2018-08-16 Thread Marcus Linsner
> 
> I've observed that Qubes installation rarely ever succeeds on X370 
> motherboards so I believe the same case applies to X470 motherboards with a 
> higher chance of failure since it is newer. The reason for this I believe is 
> because these high-end gaming motherboards have a lot of functionalities/bugs 
> that break/interfere with Qubes installation which is an awful letdown.

I've had no issues installing Qubes R4.0 several times (for fun) on an Asus 
PRIME X370-A motherboard.

As an aside, this motherboard even has a setting to use Z370's Trusted 
Platform Module (TPM) [1] - the BIOS setting "Firmware-based Trusted Platform 
Module (fTPM)" - so I assume I can set up Anti Evil Maid in Qubes, but I 
haven't tried that yet.

[1] shown as Intel® Platform Trust Technology (Intel® PTT) [2] in this link: 
https://www.intel.com/content/www/us/en/products/chipsets/desktop-chipsets/z370.html
[2] PTT to TPM mapped in this link: 
https://www.intel.com/content/www/us/en/support/articles/07452/mini-pcs.html



Re: [qubes-users] Things to do in Qubes before a BIOS update

2018-08-12 Thread Marcus Linsner
On Sunday, August 12, 2018 at 5:36:17 PM UTC+2, Unman wrote:
> On Sat, Aug 11, 2018 at 09:52:16PM -0700, Marcus Linsner wrote:
> > Hello.
> > 
> > I'm attempting to flash a new BIOS (ie. upgrade) and I am greeted by the 
> > BIOS with the following message:
> > 
> > "Important Notice!!!
> > Please back up your Bitlocker recovery key and suspend Bitlocker encryption 
> > in the operating system before updating your BIOS or ME firmware."
> > 
> > Is there something that I need to do in Qubes (R4.0) before updating BIOS 
> > assuming either of the following:
> > 1. I don't have Anti Evil Maid installed
> > 2. I do have AEM installed.
> > 
> > while Secure Boot is Enabled in BIOS and so is TPM (1.3) ?
> > 
> > In the case of point 2 the following info exists:
> > 
> > "Xen/kernel/BIOS/firmware upgrades
> > ==
> > 
> > After Xen, kernel, BIOS, or firmware upgrades, you will need to reboot
> > and enter your disk decryption passphrase even though you can't see your
> > secret. Please note that you will see a `Freshness token unsealing failed!`
> > error. It (along with your AEM secrets) will be resealed again automatically
> > later in the boot process (see step 4.a).
> > 
> > Some additional things that can cause AEM secrets and freshness token to
> > fail to unseal (non-exhaustive list):
> > 
> > * changing the LUKS header of the encrypted root partition
> > * modifying the initrd (adding/removing files or just re-generating it)
> > * changing kernel commandline parameters in GRUB"
> > 
> > that is from 
> > https://github.com/QubesOS/qubes-antievilmaid/blob/af4f6160dfd89d126b923c183b5a9cea18b4b1b9/anti-evil-maid/README#L344-L358
> > 
> > 
> > In the case of point 1, what I want to know is whether or not I will still 
> > be able to boot my existing Qubes R4.0 installation after the BIOS update 
> > and if not how can it be fixed? This is the reason for this post.
> > 
> 
> If you have replaced your windows installation completely then I don't
> think you need to do anything in case 1. At least, I have flashed BIOS
> a number of times and not encountered problems in that situation. YMMV.
> Obviously you should take a full backup before doing this.

Thanks Unman. I have upgraded BIOS successfully and there were no issues 
booting Qubes afterwards.
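
For anyone else doing this: the full-backup step unman mentions is just the 
standard dom0 tool (the destination path here is a placeholder):

$ qvm-backup /run/media/user/backupdisk/qubes-backup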



[qubes-users] Things to do in Qubes before a BIOS update

2018-08-11 Thread Marcus Linsner
Hello.

I'm attempting to flash a new BIOS (ie. upgrade) and I am greeted by the BIOS 
with the following message:

"Important Notice!!!
Please back up your Bitlocker recovery key and suspend Bitlocker encryption in 
the operating system before updating your BIOS or ME firmware."

Is there something that I need to do in Qubes (R4.0) before updating BIOS 
assuming either of the following:
1. I don't have Anti Evil Maid installed
2. I do have AEM installed.

while Secure Boot is Enabled in BIOS and so is TPM (1.3) ?

In the case of point 2 the following info exists:

"Xen/kernel/BIOS/firmware upgrades
==

After Xen, kernel, BIOS, or firmware upgrades, you will need to reboot
and enter your disk decryption passphrase even though you can't see your
secret. Please note that you will see a `Freshness token unsealing failed!`
error. It (along with your AEM secrets) will be resealed again automatically
later in the boot process (see step 4.a).

Some additional things that can cause AEM secrets and freshness token to
fail to unseal (non-exhaustive list):

* changing the LUKS header of the encrypted root partition
* modifying the initrd (adding/removing files or just re-generating it)
* changing kernel commandline parameters in GRUB"

that is from 
https://github.com/QubesOS/qubes-antievilmaid/blob/af4f6160dfd89d126b923c183b5a9cea18b4b1b9/anti-evil-maid/README#L344-L358


In the case of point 1, what I want to know is whether or not I will still be 
able to boot my existing Qubes R4.0 installation after the BIOS update and if 
not how can it be fixed? This is the reason for this post.
