[qubes-users] Right-click stops working in i3

2021-09-11 Thread 'keyandthegate' via qubes-users
I'm using Qubes on a Librem 13v4 with i3. Sometimes my windows get into a state 
where right-click stops working: it does nothing instead of bringing up a 
context menu. I've noticed it happening in Firefox, KeePassXC, and possibly 
other applications. Opening a new window fixes the issue. It also makes the 
toolbar search dropdown in Firefox stop showing up.

It might be triggered by my laptop suspending, but I'm not sure.

Anyone have leads on where I might research how to fix this?
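
One way to narrow it down (a debugging sketch, not a known fix): run xev inside 
the affected qube while the bug is active and check whether button-3 events are 
still delivered. If xev sees them, the fault is in the application or toolkit; 
if not, it is somewhere in the GUI or window-manager layer.

# Inside the affected qube: show only pointer-button events.
# A healthy right-click prints ButtonPress/ButtonRelease with "button 3"
# when clicking inside the xev window.
xev -event button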



[qubes-users] Re: Bitcoin Core RPC qvm-connect-tcp not working?

2021-06-10 Thread 'keyandthegate' via qubes-users
Wait, apparently I can't even "netcat -l 5000" and then "telnet localhost 5000" 
in the same Debian VM? Even after clearing the iptables rules and setting the 
default policies to ACCEPT? Clearly I don't understand Qubes networking.
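
(For what it's worth, the loopback test itself may be the problem rather than 
Qubes: on Debian, /bin/nc is often netcat-traditional, which needs -p for the 
listen port. A minimal check, entirely inside one VM:)

# Terminal 1: listen on TCP 5000. netcat-traditional requires -l -p PORT;
# only the OpenBSD variant accepts plain -l PORT.
nc -l -p 5000

# Terminal 2: connect; a typed line should appear in terminal 1.
telnet localhost 5000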

‐‐‐ Original Message ‐‐‐
On Thursday, June 10th, 2021 at 12:56 AM, keyandthegate 
 wrote:

> I tried in a fresh vm:
> user@my-new-qube:~$ qvm-connect-tcp 8332:bitcoind:8332
> Binding TCP 'bitcoind:8332' to 'localhost:8332'...
> user@my-new-qube:~$ telnet localhost 8332
> Trying ::1...
> Trying 127.0.0.1...
> Connected to localhost.
> Escape character is '^]'.
> Request refused
> 2021/06/09 17:50:53 socat[992] E waitpid(): child 993 exited with status 126
> Connection closed by foreign host.
>
> How would Bitcoin Core even know that I'm connecting from a different VM, if 
> the connection is supposed to look like it comes from localhost?
>
> ‐‐‐ Original Message ‐‐‐
> On Thursday, June 10th, 2021 at 12:09 AM, keyandthegate 
>  wrote:
>
>> Hi, I'm following the instructions here:
>> https://github.com/qubenix/qubes-whonix-bitcoin/blob/master/1_joinmarket.md
>>
>> After I run "qvm-connect-tcp 8332:bitcoind:8332" in the joinmarket VM,
>> "telnet localhost 8332" works from the bitcoind VM, but does not work from
>> the joinmarket VM, where it says "Connection closed by foreign host."
>>
>> After reading
>> https://bitcoin.stackexchange.com/questions/87943/unable-to-bind-any-endpoint-for-rpc-server
>> I tried adding this to my bitcoin config:
>> rpcbind=127.0.0.1
>> rpcallowip=0.0.0.0/0
>> rpcbind=bitcoind
>> and then ran "sudo systemctl restart bitcoind", but it didn't help.
>>
>> Is there anywhere I can find a working example of using qvm-connect-tcp to
>> connect to the Bitcoin Core RPC server from another VM?



[qubes-users] Re: Bitcoin Core RPC qvm-connect-tcp not working?

2021-06-09 Thread 'keyandthegate' via qubes-users
I tried in a fresh vm:
user@my-new-qube:~$ qvm-connect-tcp 8332:bitcoind:8332
Binding TCP 'bitcoind:8332' to 'localhost:8332'...
user@my-new-qube:~$ telnet localhost 8332
Trying ::1...
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.
Request refused
2021/06/09 17:50:53 socat[992] E waitpid(): child 993 exited with status 126
Connection closed by foreign host.

How would Bitcoin Core even know that I'm connecting from a different VM, if 
the connection is supposed to look like it comes from localhost?
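
(A hedged note on that error: "Request refused" followed by socat exiting with 
status 126 is what you get when dom0's qrexec policy denies the underlying 
qubes.ConnectTCP call, so this points at policy rather than bitcoind. A sketch 
of the dom0 policy such a setup needs, in the Qubes 4.0 policy format and with 
this thread's VM names:)

# dom0: /etc/qubes-rpc/policy/qubes.ConnectTCP+8332
# Allow only the joinmarket qube to open TCP port 8332 in the bitcoind qube.
joinmarket bitcoind allow
@anyvm @anyvm deny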

‐‐‐ Original Message ‐‐‐
On Thursday, June 10th, 2021 at 12:09 AM, keyandthegate 
 wrote:

> Hi, I'm following the instructions here:
> https://github.com/qubenix/qubes-whonix-bitcoin/blob/master/1_joinmarket.md
>
> After I run "qvm-connect-tcp 8332:bitcoind:8332" in the joinmarket VM,
> "telnet localhost 8332" works from the bitcoind VM, but does not work from
> the joinmarket VM, where it says "Connection closed by foreign host."
>
> After reading
> https://bitcoin.stackexchange.com/questions/87943/unable-to-bind-any-endpoint-for-rpc-server
> I tried adding this to my bitcoin config:
> rpcbind=127.0.0.1
> rpcallowip=0.0.0.0/0
> rpcbind=bitcoind
> and then ran "sudo systemctl restart bitcoind", but it didn't help.
>
> Is there anywhere I can find a working example of using qvm-connect-tcp to
> connect to the Bitcoin Core RPC server from another VM?



[qubes-users] Bitcoin Core RPC qvm-connect-tcp not working?

2021-06-09 Thread 'keyandthegate' via qubes-users
Hi, I'm following the instructions here:
https://github.com/qubenix/qubes-whonix-bitcoin/blob/master/1_joinmarket.md

After I run "qvm-connect-tcp 8332:bitcoind:8332" in the joinmarket VM, 
"telnet localhost 8332" works from the bitcoind VM, but does not work from the 
joinmarket VM, where it says "Connection closed by foreign host."

After reading
https://bitcoin.stackexchange.com/questions/87943/unable-to-bind-any-endpoint-for-rpc-server
I tried adding this to my bitcoin config:
rpcbind=127.0.0.1
rpcallowip=0.0.0.0/0
rpcbind=bitcoind
and then ran "sudo systemctl restart bitcoind", but it didn't help.

Is there anywhere I can find a working example of using qvm-connect-tcp to 
connect to the Bitcoin Core RPC server from another VM?
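
(For reference, a minimal bitcoin.conf sketch for this kind of setup; the 
credentials are placeholders. With qvm-connect-tcp, the forwarded connection 
arrives from localhost inside the bitcoind VM, so binding loopback is enough; 
an rpcbind=bitcoind line can even stop bitcoind from binding at all if that 
name doesn't resolve there.)

# bitcoin.conf in the bitcoind VM
rpcbind=127.0.0.1
rpcallowip=127.0.0.1
rpcuser=examplerpcuser          # placeholder credentials
rpcpassword=examplerpcpassword  # placeholder credentials

With that plus a dom0 qubes.ConnectTCP policy allowing the call, "telnet 
localhost 8332" from the joinmarket VM should reach bitcoind.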



[qubes-users] Open sourcing my salt configs

2021-03-03 Thread 'keyandthegate' via qubes-users
I've been developing a lot of salt config for myself and I want to start open 
sourcing it so that I can:

- Ask for public security review
- Accept patches
- Help people use Qubes a little better, since I think Qubes supports 
anarchistic praxis and is a force for good in the world

I'm worried about the following things:

- I'd lose my security through obscurity, which I don't want to do without the 
help of at least one non-amateur code reviewer for anything I publish (and I'm 
not sure whether the economics/incentives work out such that I should be 
paying someone to help me with this)
- I don't want to publish anything security-sensitive without code review, 
because I don't want to harm people

Additionally, I'm not sure how to package salt config as a .deb. Are there any 
existing examples of this?

As an example of what I want to publish, I've written some config to create a 
private Debian package repo VM, driven by a YAML file in which you specify 
sources to download packages from (e.g. ones hosted as GitHub releases); the 
VM verifies them via GPG and then serves them to other VMs. My motivation is 
to be able to destroy and recreate my templates from salt at any time. Salt is 
not stateless (unlike e.g. nix or bazel): if you decide to stop adding a 
package repo, you have to manually remove it from the domain in addition to 
updating the salt config, and you may forget. Being able to recreate templates 
solves the otherwise almost intractable problem of knowing your templates 
aren't out of sync; it also means you can exclude templates from backups if 
you're brave, which can save a lot of space.
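
The shape of that flow, as a hypothetical shell sketch (every name, URL, and 
path here is illustrative, not the actual config):

# For one entry of the YAML spec: fetch a release .deb plus its detached
# signature, verify it against a pinned keyring, and only then publish it
# into the local apt repo directory that is served to other VMs.
set -e
url="https://github.com/example/tool/releases/download/v1.0/tool_1.0_amd64.deb"
curl -LO "$url"
curl -LO "$url.asc"
gpg --no-default-keyring --keyring ./pinned-keys.gpg \
    --verify tool_1.0_amd64.deb.asc tool_1.0_amd64.deb
mv tool_1.0_amd64.deb /srv/apt-repo/pool/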

Another example of some code I may want to publish (WARNING: I think this may 
have a critical security issue of exposing config files to domains they don't 
belong to, but I'm not sure; I'd need to investigate before publishing). It 
fixes TemplateNotFound errors when you try to jinja-include another file from 
within a `file.managed` state that uses `template: jinja`.

> # MAINTAIN: remove when fixed: https://github.com/saltstack/salt/issues/31531
> 'patch salt issue 31531':
>   cmd.run:
>     - name: |
>         if [[ ! -f "$XDG_DATA_HOME"/patched-salt-31531 ]]; then
>           # Widen the path check in salt's fileclient so includes resolve.
>           sudo sed -i'' "s#if fn_.strip() and fn_.startswith(path):#if fn_.strip() and (fn_.startswith(path) or path == '/'):#" /usr/lib/python3/dist-packages/salt/fileclient.py || exit 1
>           # Make qubes.SaltLinuxVM ship included files via --extra-filerefs.
>           if ! grep extra-filerefs /etc/qubes-rpc/qubes.SaltLinuxVM >/dev/null; then
>             sudo sed -i'' "s#salt-ssh#salt-ssh --extra-filerefs salt:///#" /etc/qubes-rpc/qubes.SaltLinuxVM
>           fi
>         fi
>         sudo mkdir -p "$XDG_DATA_HOME" || exit 1
>         sudo touch "$XDG_DATA_HOME"/patched-salt-31531 || exit 1



[qubes-users] lag causes dropped or repeated keys

2020-12-16 Thread 'keyandthegate' via qubes-users
When my computer is laggy, sometimes the UI will freeze for a second, and then 
the last key I pressed before it froze is repeated as if it had been held down 
the entire time (I like setting my key repeat rather low). Sometimes key 
presses are also dropped.

This is really frustrating; is there any way to fix this? I haven't had this 
problem on other operating systems.
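
(A possible mitigation rather than a fix, assuming the repeats come from X11 
autorepeat in dom0; the numbers are illustrative:)

# dom0: lengthen the autorepeat delay (ms) and lower the rate (Hz), so a
# lag spike has to last longer before a key counts as held down.
xset r rate 400 20

# Show the current autorepeat settings.
xset q | grep 'auto repeat'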



[qubes-users] Show kali desktop menu

2020-12-13 Thread 'keyandthegate' via qubes-users
How can I access the Kali Linux desktop menu that I get when running Kali as a 
standalone OS, from within a Kali AppVM?



[qubes-users] Re: btrfs for template/appvm

2020-12-11 Thread 'keyandthegate' via qubes-users
I have btrfs set up for dom0 and I'm using the reflink driver, but my AppVMs 
themselves seem to be ext4? Where would I even set this? I don't see an option 
on the pool or on qvm-create.
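
(For reference, pool placement is chosen per qube, and the filesystem inside 
the private volume stays ext4 either way; the pool driver only controls how 
dom0 stores the volume. A sketch in Qubes 4.0-era syntax, with an illustrative 
pool name, path, and template:)

# dom0: create a file-reflink pool on a btrfs-backed directory...
qvm-pool --add mybtrfs file-reflink -o dir_path=/var/lib/qubes-btrfs
# ...then place a new qube's volumes in that pool.
qvm-create -P mybtrfs --class AppVM --template debian-10 --label red myqube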

‐‐‐ Original Message ‐‐‐
On Saturday, December 12, 2020 12:36 AM, keyandthegate 
 wrote:

> I want to use btrfs for the snapshots feature in my appvms.
>
> I know Qubes supports btrfs for dom0:
> https://github.com/QubesOS/qubes-issues/issues/2340
>
> Does Qubes support using btrfs in individual appvms?
>
> If not, is there some other way I can get snapshots? It would make me less 
> afraid to make a mistake while using my computer.



[qubes-users] Re: Upgrading primary HD size

2020-12-11 Thread 'keyandthegate' via qubes-users
For the record, it's:

sudo btrfs filesystem resize +1t /
sudo btrfs filesystem resize +100g /
sudo btrfs filesystem resize +10g /
etc.

Just keep spamming until it gives you errors.
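
(A one-step alternative with the same tool: btrfs accepts "max" and grows the 
filesystem to everything the partition allows.)

# dom0: grow the mounted btrfs on / to the maximum available space.
sudo btrfs filesystem resize max /

# Confirm the new size.
sudo btrfs filesystem show /
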
‐‐‐ Original Message ‐‐‐
On Saturday, December 12, 2020 12:16 AM, keyandthegate 
 wrote:

> Oops, I forgot I'm using btrfs.
>
> ‐‐‐ Original Message ‐‐‐
> On Friday, December 11, 2020 11:14 AM, keyandthegate 
>  wrote:
>
>> Hi, I recently upgraded to a new primary HD; these are the steps I've taken:
>> 1. plug the new HD in via USB
>> 2. boot from Debian live
>> 3. use dd to copy my entire old HD to the new HD
>> 4. use gdisk to convert from MBR to GPT
>> 5. use gparted to move the swap partition to the end of the drive, and 
>> resize the primary partition to use the remaining space
>> 6. swap in the new HD
>>
>> I read that I need to resize the LVM thin pool, but I'm not seeing the 
>> expected output from lvs.
>> Existing threads:
>> https://groups.google.com/g/qubes-users/c/D-on-hSX1Dc/m/Q3rbYGyvAAAJ
>> https://groups.google.com/g/qubes-users/c/w9CIDaZ3Cc4/m/0xvtMUrIAgAJ
>>
>> I also have a second 2TB drive with a second pool.
>>
>> lsblk output:
>> nvme0n1 259:0 0 7.3T 0 disk
>> ├─nvme0n1p3 259:3 0 15.4G 0 part
>> │ └─luks-[...] 253:1 0 15.4G 0 crypt [SWAP]
>> ├─nvme0n1p1 259:1 0 1G 0 part /boot
>> └─nvme0n1p2 259:2 0 7.3T 0 part
>>   └─luks-[..] 253:0 0 7.3T 0 crypt /
>> [...]
>> sda 8:0 0 1.8T 0 disk
>> └─luks-[...] 253:2 0 1.8T 0 crypt
>>   ├─qubes-poolhd0_tdata 253:4 0 1.8T 0 lvm
>>   │ └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
>>   │   [... my qubes on second HD]
>>   └─qubes-poolhd0_tmeta 253:3 0 120M 0 lvm
>>     └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
>>       [... my qubes on second HD]
>> [...]
>>
>> $ qvm-pool -l
>> NAME           DRIVER
>> varlibqubes    file-reflink
>> linux-kernel   linux-kernel
>> poolhd0_qubes  lvm_thin
>>
>> $ sudo lvs -a
>> LV              VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>> [lvol0_pmspare] qubes  ewi---     120.00m
>> poolhd0         qubes  twi-aotz--   1.82t              69.41  43.01
>> [poolhd0_tdata] qubes  Twi-ao       1.82t
>> [poolhd0_tmeta] qubes  ewi-ao     120.00m
>> [... my qubes on second HD]
>>
>> Where have my Qubes on the first HD gone? They still work, but I don't see 
>> them in the output of these commands.



[qubes-users] btrfs for template/appvm

2020-12-11 Thread 'keyandthegate' via qubes-users
I want to use btrfs for the snapshots feature in my appvms.

I know Qubes supports btrfs for dom0:
https://github.com/QubesOS/qubes-issues/issues/2340

Does Qubes support using btrfs in individual appvms?

If not, is there some other way I can get snapshots? It would make me less 
afraid to make a mistake while using my computer.
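
(Partly answering the last question, hedged: Qubes' storage layer keeps volume 
revisions regardless of the filesystem inside the VM. A sketch; the qube name 
is illustrative:)

# dom0: keep a couple of automatic revisions of an AppVM's private volume...
qvm-volume config myqube:private revisions_to_keep 2

# ...and roll the volume back to its latest revision after a mistake.
qvm-volume revert myqube:private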



[qubes-users] Re: Upgrading primary HD size

2020-12-11 Thread 'keyandthegate' via qubes-users
Oops, I forgot I'm using btrfs.

‐‐‐ Original Message ‐‐‐
On Friday, December 11, 2020 11:14 AM, keyandthegate 
 wrote:

> Hi, I recently upgraded to a new primary HD; these are the steps I've taken:
> 1. plug the new HD in via USB
> 2. boot from Debian live
> 3. use dd to copy my entire old HD to the new HD
> 4. use gdisk to convert from MBR to GPT
> 5. use gparted to move the swap partition to the end of the drive, and resize 
> the primary partition to use the remaining space
> 6. swap in the new HD
>
> I read that I need to resize the LVM thin pool, but I'm not seeing the 
> expected output from lvs.
> Existing threads:
> https://groups.google.com/g/qubes-users/c/D-on-hSX1Dc/m/Q3rbYGyvAAAJ
> https://groups.google.com/g/qubes-users/c/w9CIDaZ3Cc4/m/0xvtMUrIAgAJ
>
> I also have a second 2TB drive with a second pool.
>
> lsblk output:
> nvme0n1 259:0 0 7.3T 0 disk
> ├─nvme0n1p3 259:3 0 15.4G 0 part
> │ └─luks-[...] 253:1 0 15.4G 0 crypt [SWAP]
> ├─nvme0n1p1 259:1 0 1G 0 part /boot
> └─nvme0n1p2 259:2 0 7.3T 0 part
>   └─luks-[..] 253:0 0 7.3T 0 crypt /
> [...]
> sda 8:0 0 1.8T 0 disk
> └─luks-[...] 253:2 0 1.8T 0 crypt
>   ├─qubes-poolhd0_tdata 253:4 0 1.8T 0 lvm
>   │ └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
>   │   [... my qubes on second HD]
>   └─qubes-poolhd0_tmeta 253:3 0 120M 0 lvm
>     └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
>       [... my qubes on second HD]
> [...]
>
> $ qvm-pool -l
> NAME           DRIVER
> varlibqubes    file-reflink
> linux-kernel   linux-kernel
> poolhd0_qubes  lvm_thin
>
> $ sudo lvs -a
> LV              VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> [lvol0_pmspare] qubes  ewi---     120.00m
> poolhd0         qubes  twi-aotz--   1.82t              69.41  43.01
> [poolhd0_tdata] qubes  Twi-ao       1.82t
> [poolhd0_tmeta] qubes  ewi-ao     120.00m
> [... my qubes on second HD]
>
> Where have my Qubes on the first HD gone? They still work, but I don't see 
> them in the output of these commands.



[qubes-users] Upgrading primary HD size

2020-12-11 Thread 'keyandthegate' via qubes-users
Hi, I recently upgraded to a new primary HD; these are the steps I've taken:
1. plug the new HD in via USB
2. boot from Debian live
3. use dd to copy my entire old HD to the new HD
4. use gdisk to convert from MBR to GPT
5. use gparted to move the swap partition to the end of the drive, and resize 
the primary partition to use the remaining space
6. swap in the new HD

I read that I need to resize the LVM thin pool, but I'm not seeing the expected 
output from lvs.
Existing threads:
https://groups.google.com/g/qubes-users/c/D-on-hSX1Dc/m/Q3rbYGyvAAAJ
https://groups.google.com/g/qubes-users/c/w9CIDaZ3Cc4/m/0xvtMUrIAgAJ

I also have a second 2TB drive with a second pool.

lsblk output:
nvme0n1 259:0 0 7.3T 0 disk
├─nvme0n1p3 259:3 0 15.4G 0 part
│ └─luks-[...] 253:1 0 15.4G 0 crypt [SWAP]
├─nvme0n1p1 259:1 0 1G 0 part /boot
└─nvme0n1p2 259:2 0 7.3T 0 part
  └─luks-[..] 253:0 0 7.3T 0 crypt /
[...]
sda 8:0 0 1.8T 0 disk
└─luks-[...] 253:2 0 1.8T 0 crypt
  ├─qubes-poolhd0_tdata 253:4 0 1.8T 0 lvm
  │ └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
  │   [... my qubes on second HD]
  └─qubes-poolhd0_tmeta 253:3 0 120M 0 lvm
    └─qubes-poolhd0-tpool 253:5 0 1.8T 0 lvm
      [... my qubes on second HD]
[...]

$ qvm-pool -l
NAME           DRIVER
varlibqubes    file-reflink
linux-kernel   linux-kernel
poolhd0_qubes  lvm_thin

$ sudo lvs -a
LV              VG     Attr       LSize    Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
[lvol0_pmspare] qubes  ewi---     120.00m
poolhd0         qubes  twi-aotz--   1.82t              69.41  43.01
[poolhd0_tdata] qubes  Twi-ao       1.82t
[poolhd0_tmeta] qubes  ewi-ao     120.00m
[... my qubes on second HD]

Where have my Qubes on the first HD gone? They still work, but I don't see them 
in the output of these commands.
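
(The likely answer, hedged: with / on btrfs, the default pool is varlibqubes, 
which uses the file-reflink driver, so these qubes are ordinary files under 
/var/lib/qubes rather than LVM volumes, and lvs only ever shows the lvm_thin 
pool on the second drive.)

# dom0: qubes in the file-reflink pool live on the root filesystem.
ls /var/lib/qubes/appvms/

# Their free space is simply the btrfs root's free space.
df -h /var/lib/qubes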
