[qubes-users] Re: New installation of Qubes OS stopped booting for no reason?

2018-04-21 Thread billollib


Thanks everybody for the replies. I apologize for the delay in responding -- 
I've been traveling and just got off the plane on the last leg of the trip.

I had not installed any updates in Qubes, afaik, but did install upgrades in 
both Netrunner and Windows.  Yes, my troubleshooting failed also.  No big deal 
-- I'll reinstall when I get a chance.



Re: [qubes-users] "How can I properly manage my system?" or "how do I use Admin API, salt and git or other versioning/distribution mechanisms together"

2018-04-21 Thread Marek Marczykowski-Górecki
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA256

On Fri, Apr 20, 2018 at 11:40:36PM +0200, viq wrote:
> On 18-04-20 23:21:10, Marek Marczykowski-Górecki wrote:
> > On Fri, Apr 20, 2018 at 10:51:38PM +0200, viq wrote:
> > > On 18-04-20 13:51:50, Marek Marczykowski-Górecki wrote:
> > 
> > > Hm, salt has SPM[6], which I need to read a bit more about. On one
> > > hand, it's a native salt tool, so possibly it could work better for
> > > distributing, and more importantly updating states/formulas, but on the
> > > other hand, as far as I'm aware, it doesn't currently have a concept of
> > > signing.
> > 
> > This is exactly the reason we use RPM for distribution-provided
> > formulas.
> > I've tried to play with SPM + some wrapper to actually download files
> > (dom0 has no network), but AFAIR it was a bit crazy to do it this way -
> > the only part of SPM that was left could be shortened to "tar x"...
> 
> Ah, so you looked at it more than I did. Would it make sense to have
> pretty much just the SPM file inside the RPM, and post-install talk with SPM
> to install that, or does it really bring nothing to the table?

> On the other hand, RPMs don't play nice with local modifications...

Does SPM do any better?

> > BTW each of our formula packages have FORMULA file, so it should be
> > compatible with SPM out of the box, at least in theory.
> > 
> > > > See the linked post[1] for what changes are required. Normally I'd say, let's
> > > > package it in rpm, but since qrexec policy doesn't support .d
> > > > directories, it may not work that well. In many places we use salt's
> > > > file.prepend to adjust policy files, so maybe use it here too? This
> > > > starts being quite complex:
> > > > 1. Salt formula installed (via rpm?) in dom0, to configure management VM
> > > > 2. Management VM running rest of salt formulas to configure other VMs
> > > 
> > > Yeah, this kinda follows what I was thinking. With some work (1) could
> > > be available from Qubes repos ;) I guess with defaults allowing one to set
> > > up mgmt-global, mgmt-personal and mgmt-work, with permissions set up as
> > > the names imply?
> > > 
> > > But, being salt-head that I am, what about templating the settings from
> > > pillars? 
> > 
> > I think it is a good idea, but needs some better handling of pillars. We
> > already have topd[13] module to maintain top.sls. If we could have
> > something allowing the user to simply set pillar entry X to value Y
> > (without learning yaml syntax), that would be great. Pillar modules you
> > link below may be the way to go.
> 
> Hm, where are things like labels and other VM settings stored? 

All VM properties are stored in qubes.xml. We do expose some of them as
pillars already (for example qubes:type), but I don't think it's a good
place for something not directly related to VMs.

I'm thinking of pillars like the name of the mgmt-global VM. This isn't
something that belongs to some particular VM (in qubes.xml), especially
when said mgmt-global VM doesn't exist yet.
I was hoping that some of the existing pillar modules would support
something with a user-friendly key-value interface, including:
 - listing available keys (maybe even with some description?)
 - getting and setting values
 - a GUI, or an interface to integrate with one

While a script that would handle the yaml file wouldn't be horribly long,
I'd guess someone has done that already.
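
Something like this is roughly what I have in mind - a sketch only (the
pillar path and the flat "key: value" layout are assumptions, not anything
we ship today):

PILLAR=/srv/pillar/qubes-user.sls
# read one key
pillar_get() { sed -n "s/^$1: //p" "$PILLAR"; }
# set (or append) one key
pillar_set() {
    if grep -q "^$1:" "$PILLAR"; then
        sed -i "s/^$1: .*/$1: $2/" "$PILLAR"
    else
        echo "$1: $2" >> "$PILLAR"
    fi
}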

> Maybe it
> would be possible to piggy-back on that? Even if code would be needed,
> pillars just like top system are "just another python file" that IIRC
> can even be distributed inside SPMs.
>  
> > > No, I'm not convinced whether one long yaml is better than a
> > > multitude of tiny files... But this could be another way to manage the
> > > whole thing. Some examples of what it could look like are pillar
> > > examples from rspamd-formula[7], salt-formula[8] and shorewall-formula[9]
> > > 
> > > And of course there are different ways to manage pillars than one long
> > > yaml, but this is the most common way. [10] [11] [12]
> > > 
> > > > [1] https://www.qubes-os.org/news/2017/06/27/qubes-admin-api/
> > > > [2] https://github.com/QubesOS/qubes-mgmt-salt-dom0-qvm/
> > > > [3] https://github.com/QubesOS/qubes-mgmt-salt-dom0-virtual-machines/
> > > > [4] https://github.com/QubesOS/qubes-infrastructure/
> > > > [5] https://github.com/QubesOS/qubes-mgmt-salt
> > > 
> > > [6] https://docs.saltstack.com/en/latest/topics/spm/index.html
> > > [7] 
> > > https://github.com/saltstack-formulas/rspamd-formula/blob/master/pillar.example
> > > [8] 
> > > https://github.com/saltstack-formulas/salt-formula/blob/master/pillar.example
> > > [9] 
> > > https://github.com/saltstack-formulas/shorewall-formula/blob/master/pillar.example
> > > [10] https://docs.saltstack.com/en/latest/ref/pillar/all/
> > > [11] https://docs.saltstack.com/en/latest/ref/sdb/all/index.html
> > > [12] https://docs.saltstack.com/en/latest/ref/renderers/all/index.html
> > 
> > [13] https://github.com/QubesOS/qubes-mgmt-salt-base-topd/

- -- 
Best 

[qubes-users] Re: (Qubes OS 4.0) Anyone that built a Ubuntu Xenial template successfully?

2018-04-21 Thread sdika
On Wednesday, April 18, 2018 at 4:00:19 PM UTC-4, Christoffer Lilja wrote:
> I've tried to build a Ubuntu Xenial template on Qubes OS 4.0 but I get this 
> error:
> dpkg-source: info: using options from core-agent-linux/debian/source/options: 
> --extend-diff-ignore=(^|/)(.git/.*)$ --extend-diff-ignore=(^|/)(deb/.*)$ 
> --extend-diff-ignore=(^|/)(pkgs/.*)$ --extend-diff-ignore=(^|/)(rpm/.*)$
> dpkg-source: info: using source format '3.0 (quilt)'
> dpkg-source: info: building qubes-core-agent using existing 
> ./qubes-core-agent_4.0.24.orig.tar.gz
> dpkg-source: warning: executable mode 0775 of 'qubes-rpc/qvm-open-in-dvm' 
> will not be represented in diff
> dpkg-source: info: local changes detected, the modified files are:
>  core-agent-linux/qubes-rpc/qvm-open-in-dvm
> dpkg-source: error: aborting due to unexpected upstream changes, see 
> /tmp/qubes-core-agent_4.0.24-1+xenialu1.diff.raLTJa
> dpkg-source: info: you can integrate the local changes with dpkg-source 
> --commit
> dpkg-buildpackage: error: dpkg-source -b core-agent-linux gave error exit 
> status 2
> make[2]: *** 
> [/home/user/qubes-builder/qubes-src/builder-debian/Makefile.qubuntu:215: 
> dist-package] Error 2
> make[1]: *** [Makefile.generic:166: packages] Error 1
> make: *** [Makefile:212: core-agent-linux-vm] Error 1
> 
> Does someone know how to solve this?

I also tried and failed with similar errors. I wish there were a fix for 
this; the build process isn't up to date with the documentation. When you pull 
the build system from git and run the setup script, it won't let you select 
just one template. You have to select fedora, debian and salt to get to the 
next screen where you can select ubuntu.
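
For what it's worth, the build log itself hints at one possible escape
hatch: recording the local change as a quilt patch before rebuilding (an
untested sketch; the patch name is arbitrary):

cd ~/qubes-builder/qubes-src
# fold the modified qvm-open-in-dvm into a patch so dpkg-source stops aborting
dpkg-source --commit core-agent-linux local-changes.patch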



[qubes-users] sd-card reader does not resume from suspend, module error?

2018-04-21 Thread Brian Deery
Greetings mailing list:

I am very happy that 4.0 has been released and am using it for my main
system.

One quirk that might have an easy fix is my SD card.

My Dell Precision 7520 has an internal SD card reader.


I am running the latest (non-experimental) updates for dom0.  I am using
the latest BIOS as well.

According to the logs, it looks like mmc_block, as well as several other
modules, has trouble with suspending.  Is there a module to blacklist or
some other easy fix for this?

Could it be related to this recent suspend issue?
https://github.com/QubesOS/qubes-issues/issues/3738

Should I be worried about the other modules with errors too?
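
If it comes to experimenting, one low-risk thing to try is unloading the
card-reader modules around suspend with a systemd sleep hook - a sketch
only; the module names are guesses based on the mmc_block and rtsx_pci
entries in the logs below:

#!/bin/sh
# dom0: save as /usr/lib/systemd/system-sleep/sd-reader.sh (hypothetical
# name) and chmod +x; systemd calls it with $1=pre before suspend and
# $1=post after resume
case "$1" in
    pre)  modprobe -r mmc_block rtsx_pci_sdmmc 2>/dev/null ;;
    post) modprobe rtsx_pci_sdmmc ;;
esac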



Insert before suspend:

Apr 21 15:18:05 dom0 kernel: mmc0: cannot verify signal voltage switch
Apr 21 15:18:05 dom0 kernel: mmc0: new ultra high speed SDR50 SDHC card at
address 
Apr 21 15:18:05 dom0 kernel: mmcblk0: mmc0: SL08G 7.40 GiB
Apr 21 15:18:05 dom0 kernel:  mmcblk0: p1
Apr 21 15:18:05 dom0 dbus-daemon[12182]: [session uid=1000 pid=12182]
Activating via systemd: service name='org.freedesktop.Notifications'
unit='xfce4-notifyd.service' requested by ':1.47'
Apr 21 15:18:05 dom0 systemd[12165]: Starting XFCE notifications service...
Apr 21 15:18:05 dom0 dbus-daemon[12182]: [session uid=1000 pid=12182]
Successfully activated service 'org.freedesktop.Notifications'
Apr 21 15:18:05 dom0 systemd[12165]: Started XFCE notifications service.

Remove card:

Apr 21 15:21:29 dom0 kernel: mmc0: card  removed

Insert again:

Apr 21 15:23:09 dom0 kernel: mmc0: cannot verify signal voltage switch
Apr 21 15:23:09 dom0 kernel: mmc0: new ultra high speed SDR50 SDHC card at
address 
Apr 21 15:23:09 dom0 kernel: mmcblk0: mmc0: SL08G 7.40 GiB
Apr 21 15:23:09 dom0 kernel:  mmcblk0: p1


Remove:

Apr 21 15:24:08 dom0 kernel: mmc0: card  removed



With journalctl I see this when I suspend the computer.

Apr 21 15:26:47 dom0 kernel: [ cut here ]
Apr 21 15:26:47 dom0 kernel: WARNING: CPU: 0 PID: 13288 at
/home/user/rpmbuild/BUILD/kernel-4.14.18/linux-4.14.18/kernel/power/suspend_test.c:55
suspend_test_finish+0x6b/0x70
Apr 21 15:26:47 dom0 kernel: Modules linked in: mmc_block loop
ebtable_filter ebtables ip6table_filter ip6_tables snd_hda_codec_hdmi
joydev uvcvideo videobuf2_vmalloc videobuf2_memops vide
Apr 21 15:26:47 dom0 kernel:  int3400_thermal int340x_thermal_zone
intel_hid acpi_thermal_rel sparse_keymap xenfs dm_thin_pool
dm_persistent_data libcrc32c dm_bio_prison dm_crypt rtsx_pci_
Apr 21 15:26:47 dom0 kernel: CPU: 0 PID: 13288 Comm: systemd-sleep Tainted:
G U  W   4.14.18-1.pvops.qubes.x86_64 #1
Apr 21 15:26:47 dom0 kernel: Hardware name: Dell Inc. Precision 7520/
, BIOS 1.10.2 03/09/2018
Apr 21 15:26:47 dom0 kernel: task: 88015443db80 task.stack:
c9000510
Apr 21 15:26:47 dom0 kernel: RIP: e030:suspend_test_finish+0x6b/0x70
Apr 21 15:26:47 dom0 kernel: RSP: e02b:c90005103d28 EFLAGS: 00010282
Apr 21 15:26:47 dom0 kernel: RAX: 0026 RBX: 8208fb53
RCX: 82251e68
Apr 21 15:26:47 dom0 kernel: RDX:  RSI: 0001
RDI: 0201
Apr 21 15:26:47 dom0 kernel: RBP: 5a63 R08: 0006ce1cf04f
R09: 0026
Apr 21 15:26:47 dom0 kernel: R10: 0040 R11: 00028970
R12: 
Apr 21 15:26:47 dom0 kernel: R13: 82251b90 R14: fff0
R15: 0004
Apr 21 15:26:47 dom0 kernel: FS:  71ad037c8180()
GS:88019880() knlGS:
Apr 21 15:26:47 dom0 kernel: CS:  e033 DS:  ES:  CR0:
80050033
Apr 21 15:26:47 dom0 kernel: CR2: 8050c2b0 CR3: 000160084000
CR4: 00042660
Apr 21 15:26:47 dom0 kernel: Call Trace:
Apr 21 15:26:47 dom0 kernel:  suspend_devices_and_enter+0x185/0x7b0
Apr 21 15:26:47 dom0 kernel:  pm_suspend+0x335/0x3a0
Apr 21 15:26:47 dom0 kernel:  state_store+0x72/0xe0
Apr 21 15:26:47 dom0 kernel:  kernfs_fop_write+0x109/0x1a0
Apr 21 15:26:47 dom0 kernel:  __vfs_write+0x33/0x170
Apr 21 15:26:47 dom0 kernel:  ? __audit_syscall_entry+0xae/0x100
Apr 21 15:26:47 dom0 kernel:  vfs_write+0xb0/0x190
Apr 21 15:26:47 dom0 kernel:  SyS_write+0x52/0xc0
Apr 21 15:26:47 dom0 kernel:  do_syscall_64+0x6f/0x180
Apr 21 15:26:47 dom0 kernel:  entry_SYSCALL_64_after_hwframe+0x21/0x86
Apr 21 15:26:47 dom0 kernel: RIP: 0033:0x71ad02c98b50
Apr 21 15:26:47 dom0 kernel: RSP: 002b:7ffd961c8df8 EFLAGS: 0246
ORIG_RAX: 0001
Apr 21 15:26:47 dom0 kernel: RAX: ffda RBX: 0004
RCX: 71ad02c98b50
Apr 21 15:26:47 dom0 kernel: RDX: 0004 RSI: 5989d5601390
RDI: 0004
Apr 21 15:26:47 dom0 kernel: RBP: 5989d5601390 R08: 5989d5601240
R09: 71ad037c8180
Apr 21 15:26:47 dom0 kernel: R10: 5989d5601390 R11: 0246
R12: 0004
Apr 21 15:26:47 dom0 kernel: R13: 0001 R14: 5989d5601160
R15: 71ad02f5e3c0
Apr 21 15:26:47 dom0 kernel: Code: ea 06 69 c2 

Re: [qubes-users] Re: Guide: Monero wallet/daemon isolation w/qubes+whonix

2018-04-21 Thread qubenix
qubenix:
> pauHana:
>> After completing the two VM setups and shutting them down, is the intention 
>> then to start monero-wallet-ws and interact with the wallet thru this vm as 
>> per the usual ./monero-wallet-cli?
>>
> 
> Correct. You should be able to run the gui from monero-wallet-ws as
> well, but I haven't tried it myself.
> 
> --
> qubenix
> GPG: B536812904D455B491DCDCDD04BE1E61A3C2E500
> 

Please don't delete the conversation, so the next person who has this
trouble can see everything together.

Is your daemon sync'd fully?

-- 
qubenix
GPG: B536812904D455B491DCDCDD04BE1E61A3C2E500



Re: [qubes-users] R4.0: Can't connect to sys-firewall from Standalone HVM (not based on TemplateVM)

2018-04-21 Thread cr33dc0d3r
2018-04-17 23:21 GMT+02:00 awokd :
>
>On Tue, April 17, 2018 7:37 pm, C0d3r Cr33d wrote:
>> Hi awokd,
>>
>>
>> First, thanks for your reply and yes, that's what I suggested. However,
>> sys-net sets a netmask of 255.255.255.255 (/32) towards sys-firewall. So
>> every Qube connected to sys-firewall (or even sys-net) has to use the
>> /32 netmask.
>
>That's not true for typical use. I have an HVM on 4.0 running right now
>with internet access on 10.137.0.27/24 with default gwy of 10.137.0.6
>(sys-firewall). The corresponding vif in sys-firewall is default at
>10.137.0.6/32. Since every IP the HVM needs to talk to is outside that
>/24, routing works fine with mismatched subnets.
>

Seems like it should work the way I tried it. Leave sys-firewall at its 
defaults, set the HVM manually to:
IP: 10.137.0.27 (referring to your case), Netmask: 255.255.255.0, Gateway: 
10.137.0.6

and then just open Firefox and have fun. I will try it again. Which OS is your 
HVM running?
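
For reference, the /32-style alternative (matching what sys-net hands out)
would look something like this inside a Linux HVM - a sketch only; the
interface name eth0 and the 10.139.1.1 DNS address are assumptions:

sudo ip addr add 10.137.0.27/32 dev eth0
sudo ip route add 10.137.0.6/32 dev eth0      # host route to the gateway
sudo ip route add default via 10.137.0.6
echo 'nameserver 10.139.1.1' | sudo tee /etc/resolv.conf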
 

>
>Where that might be an issue is when you try to get the VMs talking to
>each other, but it sounds like you aren't even getting past sys-firewall?
>

That's correct. I haven't tried to ping sys-firewall itself, but when I tried 
to ping e.g. google.com, I didn't get any response.
 
>
>Not sure if there's some nftables magic that intercepts these inside
>sys-firewall and routes them properly, hopefully someone who understands
>this part better can chime in.

If someone in this group does, I would appreciate it.
 
>
>> In many tutorials, I saw sys-net providing a netmask of /24
>> (255.255.255.0). These tutorials mostly rely on Qubes R3.2.
>>
>>
>> So my Questions:
>>
>>
>> - Does the sys-net-provided netmask differ by version? R3.2, R4.0?
>
>It was /24 on 3.2, now it's /32.
>
>
Good to know. Let me consider changing the version, although it should work 
with 4.0 as well.
> 
>
>
>> - If not, what does the netmask depend on? When is the netmask set - at
>> installation or on first bootup? Is it possible to change the sys-net
>> provided netmask persistently to /24 (255.255.255.0)?
>
>If I understand it right, the VIFs are point-to-point, not shared. So even
>if you could change sys-net's netmask, I don't think it would help what
>you are trying to do.

Good Point.

Thanks for your reply; hopefully others have similar or different ideas on this.



Re: [qubes-users] trouble with whonix-14-dvm

2018-04-21 Thread 'awokd' via qubes-users
On Sat, April 21, 2018 5:23 pm, 'awokd' via qubes-users wrote:

> qvm-features whonix-ws-14-dvm appmenus-dispvm 1

Should be:

qvm-features whonix-ws-dvm-14 appmenus-dispvm 1

...




Re: [qubes-users] Re: Guide: Monero wallet/daemon isolation w/qubes+whonix

2018-04-21 Thread pauHana
How could I test the connection to the other VM, monerod-ws?

One step I have tried is relaunching rc.local on monero-wallet-ws:

user@host:~$ /rw/config/rc.local
2018/04/21 17:22:54 socat[1935] E bind(5, {AF=2 127.0.0.1:18081}, 16): Already 
in use

So it looks like that is running.  I am not sure how to check that the right 
stuff is coming through the pipe from monerod-ws.
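
One quick end-to-end check might be to ask the daemon for its status over
the forwarded port (a sketch; /get_info is a standard monerod RPC endpoint,
but whether it answers depends on the guide's socat forward being up):

curl -s http://127.0.0.1:18081/get_info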



Re: [qubes-users] trouble with whonix-14-dvm

2018-04-21 Thread 'awokd' via qubes-users
On Thu, April 19, 2018 2:06 pm, coinshark...@gmail.com wrote:
> oh yes apologies. I made a copy of default whonix-gw and whonix-ws, then
> upgraded as per
> https://www.whonix.org/wiki/Upgrading_Whonix_13_to_Whonix_14
>
>
> so you say whonix-14 is available in the unstable repo? I will try this.
>
> when changing anon-whonix to use netvm and template for whonix-14, tor
> browser works.
>
> when creating a new dvm, tor browser claims not installed.
>
> I wanted to install whonix-14 from a premade template but did not know
> how. only instructions for upgrading from 13 to 14 on whonix.org

Think I got it working with the -14 templates from unstable. In dom0:

sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable qubes-template-whonix-gw-14
qvm-create sys-whonix-14 --class AppVM --template whonix-gw-14 --label black
qvm-prefs sys-whonix-14 provides_network True

sudo qubes-dom0-update --enablerepo=qubes-dom0-unstable qubes-template-whonix-ws-14
qvm-features whonix-ws-14 whonix-ws 1
qvm-create whonix-ws-dvm-14 --class AppVM --template whonix-ws-14 --label green
qvm-features whonix-ws-14-dvm appmenus-dispvm 1
qvm-prefs whonix-ws-dvm-14 template_for_dispvms true
qvm-prefs whonix-ws-dvm-14 netvm sys-whonix-14
qvm-prefs whonix-ws-dvm-14 default_dispvm whonix-ws-dvm-14

I suspect once these templates get released to stable, the salt command
will take care of all that for us.
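
Presumably (an assumption based on the existing formulas) something along
the lines of the usual dom0 shortcut:

sudo qubesctl state.sls qvm.anon-whonix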



[qubes-users] Re: Monero Wallet on Qubes

2018-04-21 Thread pauHana
I too have followed this guide.  I figured out how to enlarge the monerod-ws 
VM's available space (for blockchain storage) by enlarging the private volume 
thru dom0:

sudo qvm-volume extend monerod-ws:private 80GB


You can confirm it worked in monerod-ws by checking with the df -h command and 
confirming that the size of /dev/xvdX mounted on /rw changed.
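
For example (a quick check; the private volume usually shows up as
/dev/xvdb, but treat that as an assumption and verify with lsblk):

df -h /rw
lsblk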


My issue now is using the setup.  Is one supposed to relaunch 
monero-wallet-ws after completing the guide and interact with the wallet thru 
the usual ./monero-wallet-cli?  When I do this I get:

"Error: wallet failed to connect to daemon: http://localhost:18081. Daemon 
either is not started or wrong port was passed.  Please make sure daemon is 
running or change the daemon address using the 'set_daemon' command"

Any ideas?



Re: [qubes-users] Re: Guide: Monero wallet/daemon isolation w/qubes+whonix

2018-04-21 Thread qubenix
pauHana:
> After completing the two VM setups and shutting them down, is the intention 
> then to start monero-wallet-ws and interact with the wallet thru this VM as 
> per the usual ./monero-wallet-cli?
> 

Correct. You should be able to run the gui from monero-wallet-ws as
well, but I haven't tried it myself.

--
qubenix
GPG: B536812904D455B491DCDCDD04BE1E61A3C2E500



[qubes-users] Re: Guide: Monero wallet/daemon isolation w/qubes+whonix

2018-04-21 Thread pauHana
After completing the two VM setups and shutting them down, is the intention 
then to start monero-wallet-ws and interact with the wallet thru this VM as 
per the usual ./monero-wallet-cli?



Re: [qubes-users] Difficulty after attempted template re-install

2018-04-21 Thread 'awokd' via qubes-users
On Sat, April 21, 2018 1:02 pm, Chris Laprise wrote:
>
> OK, problem may be that 'qubes-core-admin/pull/203' hasn't reached
> stable yet. I think this should be tried after updating from
> qubes*testing.

Looks like Marek just pushed a fix for core-admin-client v4.0.17 to
testing
(https://github.com/marmarek/qubes-core-admin-client/commit/c75c0176dc23dbb50dc1420c2bfe181844e4ae47).
I updated to testing and the bug is gone, so --action=reinstall is working
correctly now. Thanks!

And a big thanks to whoever coded that new disk usage widget that came
with the testing updates; I think that's going to save a lot of people a
lot of grief.
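
For anyone following along, the sequence amounts to something like this
(a sketch; the template name is the one from earlier in this thread):

sudo qubes-dom0-update --enablerepo=qubes-dom0-current-testing
sudo qubes-dom0-update --action=reinstall qubes-template-fedora-26-minimal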




Re: [qubes-users] Difficulty after attempted template re-install

2018-04-21 Thread Chris Laprise

On 04/21/2018 08:54 AM, Chris Laprise wrote:
> On 04/21/2018 07:18 AM, 'awokd' via qubes-users wrote:
>> On Fri, April 20, 2018 11:38 pm, trueriver wrote:
>>> Is that -root-tmp volume a sign of a bug, if so where?
>>>
>>> I am not confident of reproducing the bug, if indeed it is one.
>>>
>>> My gut feeling is that it may not be enough to make a useful bugrep, but
>>> will do so if you or awokd think I should.
>>>
>>> One thought I had is how do I know if I run out of pool space - might
>>> that have triggered something like this or should I get an elegant
>>> warning? Certainly my disk space is overcommitted, with the magic of
>>> sparse files.
>>
>> Don't think it's pool space related.
>
> Not seeing this as a space issue, but as a possible lvm volume
> organization issue which causes reinstall to abort part way through.
>
>> Tried to reproduce this. First installed the minimal template, tested,
>> then did an --action=reinstall. The menu shortcut for xterm stopped
>> working. Did a refresh applications in Qube Settings but that also failed
>> to start the template qube. qvm-run gave me "VM directory does not exist:
>> /var/lib/qubes/vm-templates/fedora-26-minimal" and ls confirms it does
>> not. This is probably a bug, possibly
>> https://github.com/QubesOS/qubes-issues/issues/3169. I'll make a note in
>> there.
>
> It's definitely not the same as #3169, though it might as well be reported
> there. As I mentioned, the storage layer was updated and it's partly to
> implement #3169.

OK, problem may be that 'qubes-core-admin/pull/203' hasn't reached 
stable yet. I think this should be tried after updating from qubes*testing.



--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886



Re: [qubes-users] Difficulty after attempted template re-install

2018-04-21 Thread Chris Laprise

On 04/21/2018 07:18 AM, 'awokd' via qubes-users wrote:
> On Fri, April 20, 2018 11:38 pm, trueriver wrote:
>> Is that -root-tmp volume a sign of a bug, if so where?
>>
>> I am not confident of reproducing the bug, if indeed it is one.
>>
>> My gut feeling is that it may not be enough to make a useful bugrep, but
>> will do so if you or awokd think I should.
>>
>> One thought I had is how do I know if I run out of pool space - might
>> that have triggered something like this or should I get an elegant
>> warning? Certainly my disk space is overcommitted, with the magic of
>> sparse files.
>
> Don't think it's pool space related.

Not seeing this as a space issue, but as a possible lvm volume 
organization issue which causes reinstall to abort part way through.

> Tried to reproduce this. First installed the minimal template, tested,
> then did an --action=reinstall. The menu shortcut for xterm stopped
> working. Did a refresh applications in Qube Settings but that also failed
> to start the template qube. qvm-run gave me "VM directory does not exist:
> /var/lib/qubes/vm-templates/fedora-26-minimal" and ls confirms it does
> not. This is probably a bug, possibly
> https://github.com/QubesOS/qubes-issues/issues/3169. I'll make a note in
> there.

It's definitely not the same as #3169, though it might as well be reported 
there. As I mentioned, the storage layer was updated and it's partly to 
implement #3169.

> At that point, "sudo dnf remove qubes-template-fedora-26-minimal" followed
> by "sudo qubes-dom0-update qubes-template-fedora-26-minimal" worked with
> no errors and restored proper function.
>
> Next, I tested with the template running. --action=reinstall shut down the
> template before doing its work, but resulted in the same bug as before.
>
> Doing a dnf remove with the template running failed with an error message
> about same. I didn't see -root-tmp at any time; not sure what might have
> created it.


--

Chris Laprise, tas...@posteo.net
https://github.com/tasket
https://twitter.com/ttaskett
PGP: BEE2 20C5 356E 764A 73EB  4AB3 1DC4 D106 F07F 1886



[qubes-users] Re: Qubes 4 on USB Not Rebooting

2018-04-21 Thread Campbell
On Saturday, April 21, 2018 at 12:05:05 AM UTC-7, john wrote:
> On 04/20/18 14:58, Campbell wrote:
> > On Friday, April 20, 2018 at 11:01:16 AM UTC-7, Campbell wrote:
> >> I have a problem with a Qubes 4 installation on USB that will not boot 
> >> after the initial setup. It will restart after initial install, BIOS sees 
> >> the drive as USB Qubes, boots into the configuration loader and eventually 
> >> into the OS. Once there I can do everything and it all works including my 
> >> Windows HVM yesterday.
> >>
> >> But what I thought was a fluke is turning into a real problem.
> >> USB will not boot again if I choose to shut down in the Qubes OS.
> >>
> >> I've now tried this with 2 different size and manufacturer USB drives with 
> >> same results. The BIOS sees either drive as simply a USB drive (instead of 
> >> showing "USB Qubes") and apparently does not see the boot loader any more.
> >>
> >> Please help!
> > 
> > I just now tried a Qubes 3.2 install and have the same problem. Once I 
> > restart after all configuration is done, it never boots again. Computer 
> > BIOS sees the disk but there is nothing to load, even though it rebooted 
> > successfully during the installation.
> > 
> 
> so, 1st you tried to install Q4.0 from USB installation media to another 
> USB drive ?  that failed,  so then you tried to install Q3.2  from one 
> USB drive installtion media  to  another USB drive ?  Is that right ?
> 
> and you've seen and followed this ?
> --
> Installing to a USB drive
> 
> Installing an operating system onto a USB drive can be a convenient and 
> secure method of ensuring that your data is protected. Be advised that a 
> minimum storage of 32 GB is required on the USB drive. This installation 
> process may take longer than an installation on a standard hard disk. 
> The installation process is identical to using a hard disk in 
> conjunction with two exceptions:
> 
>  Select the USB as the storage location for the OS.
> 
>  Leave the option checked to “Automatically configure my Qubes 
> installation to the disk(s) I selected and return me to the main menu”.
> 
> --

There is no option in the install script to "Automatically configure my Qubes 
installation to the disk(s) I selected and return me to the main menu"
When I select my disk, there is an option checked for "Automatically partition 
disk" but nothing like what is written in the documentation.



Re: [qubes-users] suspend to ram, r8169, networking in sys-net not working after resume

2018-04-21 Thread argand12


--
Securely sent with Tutanota. Claim your encrypted mailbox today!
https://tutanota.com 

21. Apr 2018 13:16 by i...@maa.bz:


> Hi,
>
>
> On 04/21/2018 01:49 PM, argan...@tutanota.com wrote:
>> Hey guys, sorry about sending confidential email earlier, didn't realize 
>> tutanota would encrypt it like that.
>>
>> My issue:
>>
>> On qubes version r4.0 after resuming from suspend networking isn't working. 
>> On qubes r3.2 this wasn't an issue.
>>
>>
>> After resuming from suspend to ram, networking on sys-net isn't working:
>> $ ip addr
>> 6: ens5:  mtu 1500 qdisc fq_codel state 
>> DOWN group default qlen 1000
>>     link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff
>>
>> Usually it would be:
>> $ ip addr
>> 2: ens5:  mtu 1500 qdisc fq_codel state UP 
>> group default qlen 1000
>>     link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff
>>     inet 192.168.5.11/24 brd 192.168.5.255 scope global dynamic ens5
>>    valid_lft 86362sec preferred_lft 86362sec
>>     inet6 fe80::e5b7:4276:7bc8:f799/64 scope link 
>>    valid_lft forever preferred_lft forever
>>
>> $ lspci
>> 00:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
>> RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
>>
>> ---
>> This is the output of dmesg after resuming from suspend:
>>
>> [  182.294472] audit: type=1106 audit(1524300197.335:110): pid=2337 uid=0 
>> auid=1000 ses=1 msg='op=PAM:session_close 
>> grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix 
>> acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 
>> res=success'
>> [  182.294519] audit: type=1104 audit(1524300197.335:111): pid=2337 uid=0 
>> auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
>> exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
>> [  250.187171] audit: type=1100 audit(1524300265.223:112): pid=2361 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:authentication 
>> grantors=pam_rootok acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? 
>> addr=? terminal=? res=success'
>> [  250.187533] audit: type=1103 audit(1524300265.223:113): pid=2361 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
>> acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
>> res=success'
>> [  250.209409] audit: type=1101 audit(1524300265.245:114): pid=2362 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_unix 
>> acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
>> res=success'
>> [  250.209448] audit: type=1006 audit(1524300265.245:115): pid=2362 uid=0 
>> old-auid=4294967295 auid=0 tty=(none) old-ses=4294967295 ses=2 res=1
>> [  250.209470] audit: type=1105 audit(1524300265.245:116): pid=2362 uid=0 
>> auid=0 ses=2 msg='op=PAM:session_open 
>> grantors=pam_selinux,pam_selinux,pam_loginuid,pam_keyinit,pam_limits,pam_systemd,pam_unix
>>  acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
>> res=success'
>> [  250.251619] audit: type=1130 audit(1524300265.287:117): pid=1 uid=0 
>> auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>> [  250.254009] audit: type=1105 audit(1524300265.289:118): pid=2361 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:session_open 
>> grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
>> exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
>> [  250.276190] audit: type=1106 audit(1524300265.311:119): pid=2361 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:session_close 
>> grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
>> exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
>> [  250.276237] audit: type=1104 audit(1524300265.312:120): pid=2361 uid=0 
>> auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
>> acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
>> res=success'
>> [  250.297413] audit: type=1131 audit(1524300265.332:121): pid=1 uid=0 
>> auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
>> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
>> [  250.829214] Freezing user space processes ... (elapsed 0.000 seconds) 
>> done.
>> [  250.830138] OOM killer disabled.
>> [  250.830145] Freezing remaining freezable tasks ... (elapsed 0.000 
>> seconds) done.
>> [  250.833225] suspending xenstore...
>> [  256.412047] xen:events: Xen HVM callback vector for event delivery is 
>> enabled
>> [  256.412836] Xen Platform PCI: I/O protocol version 1
>> [  256.413015] xen:grant_table: Grant tables using version 

Re: [qubes-users] Difficulty after attempted template re-install

2018-04-21 Thread 'awokd' via qubes-users
On Fri, April 20, 2018 11:38 pm, trueriver wrote:

> Is that -root-tmp volume a sign of a bug, if so where?
>
>
> I am not confident of reproducing the bug, if indeed it is one.
>
>
> My gut feeling is that it may not be enough to make a useful bugrep, but
> will do so if you or awokd think I should.
>
> One thought I had is how do I know if I run out of pool space - might that
> have triggered something like this or should I get an elegant warning?
> Certainly my disk space is overcommitted, with the magic of sparse files.

Don't think it's pool space related.

Tried to reproduce this. First installed the minimal template, tested,
then did an --action=reinstall. The menu shortcut for xterm stopped
working. Did a refresh applications in Qube Settings but that also failed
to start the template qube. qvm-run gave me "VM directory does not exist:
/var/lib/qubes/vm-templates/fedora-26-minimal" and ls confirms it does
not. This is probably a bug, possibly
https://github.com/QubesOS/qubes-issues/issues/3169. I'll make a note in
there.

At that point, "sudo dnf remove qubes-template-fedora-26-minimal" followed
by "sudo qubes-dom0-update qubes-template-fedora-26-minimal" worked with
no errors and restored proper function.

Next, I tested with the template running. --action=reinstall shut down the
template before doing its work, but resulted in the same bug as before.

Doing a dnf remove with the template running failed with an error message
about same. I didn't see -root-tmp at any time; not sure what might have
created it.




Re: [qubes-users] suspend to ram, r8169, networking in sys-net not working after resume

2018-04-21 Thread Ivan Mitev
Hi,


On 04/21/2018 01:49 PM, argan...@tutanota.com wrote:
> Hey guys, sorry about sending confidential email earlier, didn't realize 
> tutanota would encrypt it like that.
> 
> My issue:
> 
> On qubes version r4.0 after resuming from suspend networking isn't working. 
> On qubes r3.2 this wasn't an issue.
> 
> 
> After resuming from suspend to ram, networking on sys-net isn't working:
> $ ip addr
> 6: ens5:  mtu 1500 qdisc fq_codel state 
> DOWN group default qlen 1000
>     link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff
> 
> Usually it would be:
> $ ip addr
> 2: ens5:  mtu 1500 qdisc fq_codel state UP 
> group default qlen 1000
>     link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.5.11/24 brd 192.168.5.255 scope global dynamic ens5
>    valid_lft 86362sec preferred_lft 86362sec
>     inet6 fe80::e5b7:4276:7bc8:f799/64 scope link 
>    valid_lft forever preferred_lft forever
> 
> $ lspci
> 00:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. 
> RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 15)
> 
> ---
> This is the output of dmesg after resuming from suspend:
> 
> [  182.294472] audit: type=1106 audit(1524300197.335:110): pid=2337 uid=0 
> auid=1000 ses=1 msg='op=PAM:session_close 
> grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix 
> acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 
> res=success'
> [  182.294519] audit: type=1104 audit(1524300197.335:111): pid=2337 uid=0 
> auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
> exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
> [  250.187171] audit: type=1100 audit(1524300265.223:112): pid=2361 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:authentication grantors=pam_rootok 
> acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
> res=success'
> [  250.187533] audit: type=1103 audit(1524300265.223:113): pid=2361 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
> acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
> res=success'
> [  250.209409] audit: type=1101 audit(1524300265.245:114): pid=2362 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_unix 
> acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
> res=success'
> [  250.209448] audit: type=1006 audit(1524300265.245:115): pid=2362 uid=0 
> old-auid=4294967295 auid=0 tty=(none) old-ses=4294967295 ses=2 res=1
> [  250.209470] audit: type=1105 audit(1524300265.245:116): pid=2362 uid=0 
> auid=0 ses=2 msg='op=PAM:session_open 
> grantors=pam_selinux,pam_selinux,pam_loginuid,pam_keyinit,pam_limits,pam_systemd,pam_unix
>  acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
> res=success'
> [  250.251619] audit: type=1130 audit(1524300265.287:117): pid=1 uid=0 
> auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [  250.254009] audit: type=1105 audit(1524300265.289:118): pid=2361 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:session_open 
> grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
> exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
> [  250.276190] audit: type=1106 audit(1524300265.311:119): pid=2361 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:session_close 
> grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
> exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
> [  250.276237] audit: type=1104 audit(1524300265.312:120): pid=2361 uid=0 
> auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
> acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
> res=success'
> [  250.297413] audit: type=1131 audit(1524300265.332:121): pid=1 uid=0 
> auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
> exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
> [  250.829214] Freezing user space processes ... (elapsed 0.000 seconds) done.
> [  250.830138] OOM killer disabled.
> [  250.830145] Freezing remaining freezable tasks ... (elapsed 0.000 seconds) 
> done.
> [  250.833225] suspending xenstore...
> [  256.412047] xen:events: Xen HVM callback vector for event delivery is 
> enabled
> [  256.412836] Xen Platform PCI: I/O protocol version 1
> [  256.413015] xen:grant_table: Grant tables using version 1 layout
> [  256.458068] OOM killer enabled.
> [  256.458078] Restarting tasks ... done.
> [  256.567633] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
> [  256.567649] Warning! ehci_hcd should always be loaded before uhci_hcd and 
> ohci_hcd, not after
> [  256.569187] ehci-pci: EHCI 

[qubes-users] suspend to ram, r8169, networking in sys-net not working after resume

2018-04-21 Thread argand12
Hey guys, sorry about sending confidential email earlier, didn't realize 
tutanota would encrypt it like that.

My issue:

On qubes version r4.0 after resuming from suspend networking isn't working. On 
qubes r3.2 this wasn't an issue.


After resuming from suspend to ram, networking on sys-net isn't working:
$ ip addr
6: ens5:  mtu 1500 qdisc fq_codel state DOWN 
group default qlen 1000
    link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff

Usually it would be:
$ ip addr
2: ens5:  mtu 1500 qdisc fq_codel state UP 
group default qlen 1000
    link/ether 4c:cc:6a:30:f5:90 brd ff:ff:ff:ff:ff:ff
    inet 192.168.5.11/24 brd 192.168.5.255 scope global dynamic ens5
   valid_lft 86362sec preferred_lft 86362sec
    inet6 fe80::e5b7:4276:7bc8:f799/64 scope link 
   valid_lft forever preferred_lft forever

$ lspci
00:05.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 
PCI Express Gigabit Ethernet Controller (rev 15)

---
This is the output of dmesg after resuming from suspend:

[  182.294472] audit: type=1106 audit(1524300197.335:110): pid=2337 uid=0 
auid=1000 ses=1 msg='op=PAM:session_close 
grantors=pam_keyinit,pam_limits,pam_keyinit,pam_limits,pam_systemd,pam_unix 
acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 
res=success'
[  182.294519] audit: type=1104 audit(1524300197.335:111): pid=2337 uid=0 
auid=1000 ses=1 msg='op=PAM:setcred grantors=pam_env,pam_unix acct="root" 
exe="/usr/bin/sudo" hostname=? addr=? terminal=/dev/pts/0 res=success'
[  250.187171] audit: type=1100 audit(1524300265.223:112): pid=2361 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:authentication grantors=pam_rootok 
acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
res=success'
[  250.187533] audit: type=1103 audit(1524300265.223:113): pid=2361 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
res=success'
[  250.209409] audit: type=1101 audit(1524300265.245:114): pid=2362 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:accounting grantors=pam_unix 
acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
[  250.209448] audit: type=1006 audit(1524300265.245:115): pid=2362 uid=0 
old-auid=4294967295 auid=0 tty=(none) old-ses=4294967295 ses=2 res=1
[  250.209470] audit: type=1105 audit(1524300265.245:116): pid=2362 uid=0 
auid=0 ses=2 msg='op=PAM:session_open 
grantors=pam_selinux,pam_selinux,pam_loginuid,pam_keyinit,pam_limits,pam_systemd,pam_unix
 acct="root" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success'
[  250.251619] audit: type=1130 audit(1524300265.287:117): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  250.254009] audit: type=1105 audit(1524300265.289:118): pid=2361 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:session_open 
grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[  250.276190] audit: type=1106 audit(1524300265.311:119): pid=2361 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:session_close 
grantors=pam_keyinit,pam_limits,pam_systemd,pam_unix,pam_lastlog acct="root" 
exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? res=success'
[  250.276237] audit: type=1104 audit(1524300265.312:120): pid=2361 uid=0 
auid=4294967295 ses=4294967295 msg='op=PAM:setcred grantors=pam_rootok 
acct="root" exe="/usr/lib/qubes/qrexec-agent" hostname=? addr=? terminal=? 
res=success'
[  250.297413] audit: type=1131 audit(1524300265.332:121): pid=1 uid=0 
auid=4294967295 ses=4294967295 msg='unit=user@0 comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
[  250.829214] Freezing user space processes ... (elapsed 0.000 seconds) done.
[  250.830138] OOM killer disabled.
[  250.830145] Freezing remaining freezable tasks ... (elapsed 0.000 seconds) 
done.
[  250.833225] suspending xenstore...
[  256.412047] xen:events: Xen HVM callback vector for event delivery is enabled
[  256.412836] Xen Platform PCI: I/O protocol version 1
[  256.413015] xen:grant_table: Grant tables using version 1 layout
[  256.458068] OOM killer enabled.
[  256.458078] Restarting tasks ... done.
[  256.567633] ehci_hcd: USB 2.0 'Enhanced' Host Controller (EHCI) Driver
[  256.567649] Warning! ehci_hcd should always be loaded before uhci_hcd and 
ohci_hcd, not after
[  256.569187] ehci-pci: EHCI PCI platform driver
[  256.574784] r8169 Gigabit Ethernet driver 2.3LK-NAPI loaded
[  256.583753] r8169 :00:05.0 eth0: RTL8169 at 0xb2b0802a5000, 
4c:cc:6a:30:f5:90, XID 14100800 IRQ 77
[  256.583777] r8169 :00:05.0 eth0: jumbo 

[qubes-users] Confidential email from argand12

2018-04-21 Thread argand12
Hello,

You have just received a confidential email via Tutanota (https://tutanota.com). 
Tutanota encrypts emails automatically end-to-end, including all attachments. 
You can reach your encrypted mailbox and also reply with an encrypted email 
with the following link:

https://app.tutanota.com/#mail/LAbmvyQ0PhuuXhnpOYd5qU6uF_B_ig

This email was automatically generated for sending the link. The link stays 
valid until you receive a new confidential email from me.

Kind regards,
argand12





[qubes-users] Re: New installation of Qubes OS stopped booting for no reason?

2018-04-21 Thread john

On 04/19/18 08:05, billollib-re5jqeeqqe8avxtiumw...@public.gmane.org wrote:

> So, I had installed Qubes 4 on a triple boot laptop (Win 10, Netrunner linux, 
> Qubes OS). It had installed fine, and I had booted up in Qubes three or four 
> times, played with the VMs, ran firefox, poked around a little, and was 
> happy. Then I got busy with some other stuff and set it aside, and worked 
> mostly in Netrunner for my "real" work.
>
> Today, I come back to it, and Qubes won't boot.  I get the following errors 
> on the boot screen:
>
> Failed to load kernel modules
>
> kfd: kgd2kfd_probe failed
>
> Reached target basic system
>
> dracut-initqueue[334]: Warning: dracut-initqueue timeout - starting timeout 
> script
>
> Could not boot
> /dev/mapper/qubes_dom0-root does not exist
> /dev/qubes_dom0/root does not exist
>
> and I'm dumped into the rescue prompt.
>
> This has repeated three times.  I tried taking out my usb mouse (which has 
> caused problems in the past, though not this), but that didn't change 
> anything.
>
> Is this some configuration thing, or did aliens from outer space corrupt my 
> partition with their evil laptop killer ray and I need to reinstall -- I 
> don't mind, since I'm just playing around with Qubes, but I'd rather fix 
> it...
>
> Thanks!
>
> billo

Sounds a bit like my meltdown in Q4; did you try the troubleshooting choice, 
then #1 "enter LUKS passphrase", from the installation media? In my case it 
failed, and I finally gave up, after Awokd kind of confirmed I was SOOL or so.
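
(If you want to poke at it before reinstalling, the rescue shell can at
least show whether the LVM pool survived - a sketch; the partition holding
the LUKS volume is a guess, adjust to your layout:)

cryptsetup luksOpen /dev/sda2 luks    # enter your LUKS passphrase
lvm vgscan
lvm vgchange -ay qubes_dom0
ls /dev/qubes_dom0/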




[qubes-users] Re: Qubes 4 on USB Not Rebooting

2018-04-21 Thread john

On 04/20/18 14:58, Campbell wrote:

> On Friday, April 20, 2018 at 11:01:16 AM UTC-7, Campbell wrote:
>> I have a problem with a Qubes 4 installation on USB that will not boot 
>> after the initial setup. It will restart after initial install, BIOS sees 
>> the drive as USB Qubes, boots into the configuration loader and eventually 
>> into the OS. Once there I can do everything and it all works, including my 
>> Windows HVM yesterday.
>>
>> But what I thought was a fluke is turning into a real problem.
>> USB will not boot again if I choose to shut down in the Qubes OS.
>>
>> I've now tried this with 2 different size and manufacturer USB drives with 
>> the same results. The BIOS sees either drive as simply a USB drive (instead 
>> of showing "USB Qubes") and apparently does not see the boot loader any 
>> more.
>>
>> Please help!
>
> I just now tried a Qubes 3.2 install and have the same problem. Once I 
> restart after all configuration is done, it never boots again. Computer 
> BIOS sees the disk but there is nothing to load, even though it rebooted 
> successfully during the installation.

So, first you tried to install Q4.0 from USB installation media to another 
USB drive, and that failed; then you tried to install Q3.2 from one USB 
drive's installation media to another USB drive? Is that right?

And you've seen and followed this?
--
Installing to a USB drive

Installing an operating system onto a USB drive can be a convenient and 
secure method of ensuring that your data is protected. Be advised that a 
minimum storage of 32 GB is required on the USB drive. This installation 
process may take longer than an installation on a standard hard disk. 
The installation process is identical to using a hard disk in 
conjunction with two exceptions:

    Select the USB as the storage location for the OS.

    Leave the option checked to “Automatically configure my Qubes 
installation to the disk(s) I selected and return me to the main menu”.


--
