Re: Seeking a Terminal Emulator on Debian for "Passthrough" Printing

2024-01-13 Thread Richard Hector

On 14/01/24 03:59, Greg Wooledge wrote:

I have dealt with terminals with passthrough printers before, but it
was three decades ago, and I've certainly never heard of a printer
communicating *back* to the host over this channel


I've also set up passthrough printers on terminals - which were hanging 
off muxes ... it's a serial connection, so bidirectional communication 
should be fine, and more recent printers would make use of that.


And in fact, when we ran out of mux ports, we even hung an extra 
terminal off the passthrough port, so bidirectional worked :-)


These were physical serial terminals, of course - I don't remember 
having to get a terminal emulator to do this. It also wasn't on Linux - 
some were on SCO, and the others might have been on some kind of 
mainframe - a government department. We weren't involved in that side of it.


Richard



Re: find question

2024-01-13 Thread Richard Hector

On 30/12/23 01:27, Greg Wooledge wrote:

On Fri, Dec 29, 2023 at 10:56:52PM +1300, Richard Hector wrote:

find $dir -mtime +7 -delete


"$dir" should be quoted.


Got it, thanks.


Will that fail to delete higher directories, because the deletion of files
updated the mtime?

Or does it get all the mtimes first, and use those?


It doesn't delete directories recursively.

unicorn:~$ mkdir -p /tmp/foo/bar
unicorn:~$ touch /tmp/foo/bar/file
unicorn:~$ find /tmp/foo -name bar -delete
find: cannot delete ‘/tmp/foo/bar’: Directory not empty


Understood.


But I suppose you're asking "What if it deletes both the file and the
directory, because they both qualify?"

In that case, you should use the -depth option, so that it deletes
the deepest items first.

unicorn:~$ find /tmp/foo -depth -delete
unicorn:~$ ls /tmp/foo
ls: cannot access '/tmp/foo': No such file or directory

Without -depth, it would try to delete the directory first, and that
would fail because the directory's not empty.

-depth must appear AFTER the pathnames, but BEFORE any other arguments
such as -mtime or -name.


Except that from the man page, -delete implies -depth. Maybe that's a 
GNUism; I don't know.



And how precise are those times? If I'm running a cron job that deletes
7-day-old directories then creates a new one less than a second later, will
that reliably get the stuff that's just turned 7 days old?


The POSIX documentation describes it pretty well:

-mtime n  The primary shall evaluate as true if the  file  modification
  time  subtracted  from  the  initialization  time, divided by
  86400 (with any remainder discarded), is n.

To qualify for -mtime +7, a file's age as calculated above must be at
least 8 days.  (+7 means more than 7.  It does not mean 7 or more.)


So 7 days and one second doesn't count as "more than 7 days"? It 
truncates the value to integer days before comparing?


Ah, yes, I see that now under -atime. Confusing. Thanks for pushing me 
to investigate :-)
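For anyone following along, the truncation rule is easy to demonstrate (GNU touch and find assumed; the paths are just a scratch directory):

```shell
# -mtime +7 matches only files whose age, truncated to whole days,
# exceeds 7 -- i.e. files at least 8 full days old.
dir=$(mktemp -d)
touch -d '7 days ago' "$dir/seven-days"   # age truncates to 7 -> NOT matched
touch -d '8 days ago' "$dir/eight-days"   # age truncates to 8 -> matched
find "$dir" -type f -mtime +7             # prints only .../eight-days
```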



It's not uncommon for the POSIX documentation of a command to be superior
to the GNU documentation of that same command, especially a GNU man page.
GNU info pages are often better, but GNU man pages tend to be lacking.


Understood, thanks. Though it might be less correct where GNUisms exist.

That leaves the question: When using -delete (and -depth), does the 
deletion of files within a directory update the mtime of that directory, 
thereby rendering the directory ineligible for deletion when it would 
have been before? Or is the mtime of that directory recorded before the 
contents are processed?


I just did a quick test (using -mmin -1 instead), and it did delete the 
whole lot.


So I'm still unclear why sometimes the top-level directory (or a 
directory within it) gets left behind. I've just noticed that one of the 
directories (not the one in $dir) contains a '@' symbol; I don't know if 
that affects it?


I'm tempted to avoid the problem by only using find for the top-level 
directory, and exec'ing "rm -r(f)" on it. I'm sure you'll tell me there 
are problems with that, too :-)


Apologies for the slow response - sometimes the depression kicks in and 
I don't get back to a problem for a while :-(


Cheers,
Richard



find question

2023-12-29 Thread Richard Hector

Hi all,

When using:

find $dir -mtime +7 -delete

Will that fail to delete higher directories, because the deletion of 
files updated the mtime?


Or does it get all the mtimes first, and use those?

And how precise are those times? If I'm running a cron job that deletes 
7-day-old directories then creates a new one less than a second later, 
will that reliably get the stuff that's just turned 7 days old? Or will 
there be a race condition depending on how quickly cron starts the 
script, which could be different each time?


Is there a better way to do this?

Cheers,
Richard



Re: lists

2023-12-20 Thread Richard Hector

On 21/12/23 11:55, Pocket wrote:


On 12/20/23 17:37, gene heskett wrote:

On 12/20/23 12:05, Pocket wrote:


On 12/20/23 11:51, gene heskett wrote:

On 12/20/23 08:30, Pocket wrote:
If I get one bounce email I am banned; I will never get to even 10%, 
as at 2% I am gone.
That may be a side effect that your provider should address, or as 
suggested by others, change providers.



Actually I can not change as the ISP has exclusive rights to the high 
speed internet in the area I reside in.


No other providers are allowed.


You could use an email provider that is not your ISP.

Richard



Re: sid

2023-11-29 Thread Richard Hector

On 28/11/23 04:52, Michael Thompson wrote:

[lots of stuff]

Quick question - are you subscribed to the list? I notice you've replied 
a couple of times to your own emails, but not to any of the people 
who've offered suggestions. It's probably a good idea to subscribe, or 
at least check the archives:


https://lists.debian.org/debian-user/recent

Secondly, you say:

"I sent a big email a couple of days ago, which covered how you might 
work around that, but so far, it has not been fixed.

By my reckoning, it's been 6 days now."

Filing a bug may well be useful, but it should be done through the 
proper channels, not via a post on debian-user.


https://www.debian.org/Bugs/Reporting

Cheers,
Richard



Re: Default DNS lookup command?

2023-11-12 Thread Richard Hector

On 31/10/23 16:27, Max Nikulin wrote:

On 30/10/2023 14:03, Richard Hector wrote:

On 24/10/23 06:01, Max Nikulin wrote:

getent -s dns hosts zircon

Ah, thanks. But I don't feel too bad about not finding that ... 
'service' is not defined in that file, 'dns' doesn't occur, and 
searching for 'hosts' doesn't give anything useful either. I guess 
reading nsswitch.conf(5) is required.


Do you mean that "hosts" entry in your /etc/nsswitch.conf lacks "dns"? 
Even systemd NSS plugins recommend keeping it as a fallback. If you get 
no results then your resolver or DNS server may not be configured to 
resolve single-label names. Try a fully qualified name


     getent -s dns ahosts debian.org


Sorry for the confusion (and delay) - I think I was referring to the 
getent man page, rather than the config file.


Richard



Re: systemd service oddness with openvpn

2023-11-12 Thread Richard Hector

On 12/11/23 04:47, Kamil Jońca wrote:

Richard Hector  writes:


Hi all,

I have a machine that runs as an openvpn server. It works fine; the
VPN stays up.


Are you sure? Have your clients connected, and so on?


Yes. I can ssh to the machines at the other end.


However, after running for a while, I get these repeatedly in syslog:

Nov 07 12:17:24 ovpn2 openvpn[213741]: Options error: In [CMD-LINE]:1:
Error opening configuration file: opvn2.conf

Here you have something like a typo (opvn2c.conf - I would expect ovpn2.conf)


Bingo - I was confused by the extra c, but that's not what you were 
referring to.


The logrotate postrotate line has

systemctl restart openvpn-server@opvn2

which is the source of the misspelling.

So it's trying to restart the wrong service.

To be honest, I haven't been very happy with the way the services get 
made up on the fly like that, only to fail ... it's bitten me in other 
ways before.


Thank you very much :-)

Richard



Re: systemd service oddness with openvpn

2023-11-11 Thread Richard Hector

On 7/11/23 12:41, Richard Hector wrote:

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:


I don't know if anyone's watching, but ...

It appears that this happens when logrotate restarts openvpn. I just 
have "systemctl restart openvpn-server@ovpn2" in my postrotate section 
in the logrotate config.


I've seen other people recommend using 'copytruncate' instead of 
restarting openvpn, and others suggesting that openvpn should be 
configured to log via syslog, or that it should just log to stdout, and 
init (systemd) can then capture it.


I'm not sure what's the best option here - and I still don't know why 
restarting it causes this failure.
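For what it's worth, a copytruncate-based stanza sidesteps the restart entirely; a sketch (log path and rotation policy assumed, not taken from the machine in question):

```
/var/log/openvpn/ovpn2.log {
        weekly
        rotate 4
        compress
        missingok
        notifempty
        copytruncate
}
```

The usual trade-off: copytruncate can drop log lines written between the copy and the truncate, which is why some people prefer having the daemon log via syslog or stdout instead.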


Richard



Re: Request to Establish a Debian Mirror Server for Bangladeshi Users

2023-11-07 Thread Richard Hector

On 8/11/23 17:10, Md Shehab wrote:

Dear Debian Community,

I hope this email finds you well. I am writing to propose the 
establishment of a Debian mirror server in Bangladesh


I am confident that a Debian mirror server in Bangladesh would be a 
valuable resource for the local tech community


I would like to request your support for this proposal. I am open to any 
suggestions or feedback you may have.


I suggest starting by reading here:

https://www.debian.org/mirror/ftpmirror

Cheers,
Richard



Re: systemd service oddness with openvpn

2023-11-06 Thread Richard Hector

On 7/11/23 12:41, Richard Hector wrote:

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:


I should also have mentioned - this is debian bookworm (12.2)

Richard



systemd service oddness with openvpn

2023-11-06 Thread Richard Hector

Hi all,

I have a machine that runs as an openvpn server. It works fine; the VPN 
stays up.


However, after running for a while, I get these repeatedly in syslog:

Nov 07 12:17:24 ovpn2 openvpn[213741]: Options error: In [CMD-LINE]:1: 
Error opening configuration file: opvn2.conf

Nov 07 12:17:24 ovpn2 openvpn[213741]: Use --help for more information.
Nov 07 12:17:24 ovpn2 systemd[1]: openvpn-server@opvn2.service: Main 
process exited, code=exited, status=1/FAILURE
Nov 07 12:17:24 ovpn2 systemd[1]: openvpn-server@opvn2.service: Failed 
with result 'exit-code'.
Nov 07 12:17:24 ovpn2 systemd[1]: Failed to start 
openvpn-server@opvn2.service - OpenVPN service for opvn2.
Nov 07 12:17:29 ovpn2 openvpn[213770]: Options error: In [CMD-LINE]:1: 
Error opening configuration file: opvn2.conf

Nov 07 12:17:29 ovpn2 openvpn[213770]: Use --help for more information.
Nov 07 12:17:29 ovpn2 systemd[1]: openvpn-server@opvn2.service: Main 
process exited, code=exited, status=1/FAILURE
Nov 07 12:17:29 ovpn2 systemd[1]: openvpn-server@opvn2.service: Failed 
with result 'exit-code'.
Nov 07 12:17:29 ovpn2 systemd[1]: Failed to start 
openvpn-server@opvn2.service - OpenVPN service for opvn2.


This is the openvpn-server@.service:

[Unit]
Description=OpenVPN service for %I
After=network-online.target
Wants=network-online.target
Documentation=man:openvpn(8)
Documentation=https://community.openvpn.net/openvpn/wiki/Openvpn24ManPage
Documentation=https://community.openvpn.net/openvpn/wiki/HOWTO

[Service]
Type=notify
PrivateTmp=true
WorkingDirectory=/etc/openvpn/server
ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log 
--status-version 2 --suppress-timestamps --config %i.conf
CapabilityBoundingSet=CAP_IPC_LOCK CAP_NET_ADMIN CAP_NET_BIND_SERVICE 
CAP_NET_RAW CAP_SETGID CAP_SETUID CAP_SETPCAP CAP_SYS_CHROOT 
CAP_DAC_OVERRIDE CAP_AUDIT_WRITE

LimitNPROC=10
DeviceAllow=/dev/null rw
DeviceAllow=/dev/net/tun rw
ProtectSystem=true
ProtectHome=true
KillMode=process
RestartSec=5s
Restart=on-failure

[Install]
WantedBy=multi-user.target

And this is my override.conf:

[Service]
ExecStart=
ExecStart=/usr/sbin/openvpn --status %t/openvpn-server/status-%i.log 
--status-version 2 --config %i.conf


(because I want timestamps)

As I say, the VPN is functioning, and systemctl status shows it's running.

Why would it firstly think it needs starting, and secondly fail to do 
so? The config file /etc/openvpn/server/ovpn2.conf which it "fails 
to open" hasn't gone away ...


Any tips?

Note the machine is quite low powered; it's an old HP thin client. But 
this is all it does, and it seems to perform adequately.


Thanks,
Richard



Re: Default DNS lookup command?

2023-10-30 Thread Richard Hector

On 24/10/23 06:01, Max Nikulin wrote:

On 22/10/2023 18:39, Richard Hector wrote:

But not strictly a DNS lookup tool:

richard@zircon:~$ getent hosts zircon
127.0.1.1   zircon.lan.walnut.gen.nz zircon

That's from my /etc/hosts file, and overrides DNS. I didn't see an 
option in the manpage to ignore /etc/hosts.


getent -s dns hosts zircon

However /etc/resolv.conf may point to local systemd-resolved server or 
to dnsmasq started by NetworkManager and they read /etc/hosts by default.


Ah, thanks. But I don't feel too bad about not finding that ... 
'service' is not defined in that file, 'dns' doesn't occur, and 
searching for 'hosts' doesn't give anything useful either. I guess 
reading nsswitch.conf(5) is required.
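For the record, the -s flag selects a single NSS source per lookup, which is the nsswitch.conf(5) machinery in action (debian.org is just an example name; the DNS line needs a working resolver):

```shell
# consult only /etc/hosts (the 'files' source):
getent -s files hosts localhost
# consult only DNS, skipping /etc/hosts (the 'dns' source):
getent -s dns ahosts debian.org
```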


Thanks,
Richard



Re: Default DNS lookup command?

2023-10-22 Thread Richard Hector

On 22/10/23 04:56, Greg Wooledge wrote:

On Sat, Oct 21, 2023 at 05:35:21PM +0200, Reiner Buehl wrote:

is there a DNS lookup command that is installed by default on any Debian


getent hosts NAME
getent ahostsv4 NAME

That said, you get much finer control from dedicated tools.



That is a useful tool I should remember.

But not strictly a DNS lookup tool:

richard@zircon:~$ getent hosts zircon
127.0.1.1   zircon.lan.walnut.gen.nz zircon

That's from my /etc/hosts file, and overrides DNS. I didn't see an 
option in the manpage to ignore /etc/hosts.


I haven't found a way to get just DNS results, without pulling in extra 
software.


Richard



Re: Fresh install, Bookworm, XFCE keeps recreating directories

2023-09-15 Thread Richard Hector

On 16/09/23 12:19, Curt Howland wrote:


Good evening. Did a fresh install of Bookworm, installing desktop with
XFCE.

I'm not interested in having directories like "Public" and "Videos",
but every time I delete them something recreates those directories.

I can't find where these are set to be created, and re-re-re created.

Is there a way to turn this off?


Have a look at the output of "apt show xdg-user-dirs" - looks like you 
need to edit .config/user-dirs.dirs
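As a sketch of the other knob involved: xdg-user-dirs-update is what recreates these directories at login, and per xdg-user-dirs-update(1) it can be disabled outright with a per-user config file (a minimal example; check the man page on your system before relying on it):

```shell
# stop xdg-user-dirs-update from recreating Public, Videos, etc. at login
mkdir -p ~/.config
printf 'enabled=False\n' > ~/.config/user-dirs.conf
cat ~/.config/user-dirs.conf
```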


Cheers,
Richard



Re: how to change default nameserver?

2023-04-11 Thread Richard Hector

On 11/04/23 15:17, gene heskett wrote:

On 4/10/23 18:04, zithro wrote:


So, I got curious about his claim : "that change to resolv.conf adding 
the search line [search hosts, nameserver] has been required since red 
hat 5.0 in 1998".

(The bracket addition is mine)

I'm not using RHEl-based systems a lot so I may be wrong, and there's 
not a lot of material left from the 1998 web, but the resolv.conf file 
*looks* identical in RHEL-based systems, at least nowadays.
I quickly browsed a few RH help pages about resolv.conf, but couldn't 
find his claim.


I then searched for "search hosts, nameserver" on search engines 
(-with- the quotes, to only get full-match results).
Either I get no results or ... wait for it ... it *ONLY* gives me 
results where Gene posted !


So Gene, can you tell us where you read this ?


In a man page from a good 20 years ago. I still have a copy of that 
original redhat 5.0 on a shelf above me, but not a floppy drive to read 
those disks with.


Well, it's not in resolver(5) (which is for resolv.conf) on Red Hat 5.0.5.

Richard



Re: how to change default nameserver?

2023-04-10 Thread Richard Hector

On 11/04/23 15:17, gene heskett wrote:
In a man page from a good 20 years ago. I still have a copy of that 
original redhat 5.0 on a shelf above me, but not a floppy drive to read 
those disks with.


Downloading an iso ... :-)

Richard



Re: questions about cron.daily

2023-04-07 Thread Richard Hector

On 7/04/23 10:54, Greg Wooledge wrote:

On Thu, Apr 06, 2023 at 05:45:08PM -0500, David Wright wrote:

Users (including root) write their crontabs anywhere they like,
typically in a directory like ~/.cron/.


Is that... normal?  I can't say I've ever seen anyone keep a private
copy of their crontab in their home directory like that.

Most people just use "crontab -e" to edit the system's copy of their
personal crontab...


Perhaps if they want to keep it in version control?

Richard



Re: question about rc.local

2023-03-11 Thread Richard Hector

On 10/03/23 15:16, Corey Hickman wrote:



On Fri, Mar 10, 2023 at 9:44 AM > wrote:




I'm much happier with a "real" email client.




what real email client do you use? :)
I am using Mac as the regular desktop, Mac's Mail App is hard to use.
Though my server is debian system.


Thunderbird? Works on Debian as well, so you can keep using it when you 
upgrade :-)


Cheers,
Richard





Re: solution to / full

2023-03-02 Thread Richard Hector

On 2/03/23 06:00, Andy Smith wrote:

Hi,

On Wed, Mar 01, 2023 at 02:35:17PM +0100, lina wrote:

My / is almost full.

# df -h
Filesystem  Size  Used Avail Use% Mounted on
udev126G 0  126G   0% /dev
tmpfs26G  2.3M   26G   1% /run
/dev/nvme0n1p2   23G   21G  966M  96% /
tmpfs   126G   15M  126G   1% /dev/shm
tmpfs   5.0M  4.0K  5.0M   1% /run/lock
/dev/nvme0n1p6  267M   83M  166M  34% /boot
/dev/nvme0n1p1  511M  5.8M  506M   2% /boot/efi
/dev/nvme0n1p3  9.1G  3.2G  5.5G  37% /var
/dev/nvme0n1p5  1.8G   14M  1.7G   1% /tmp
/dev/nvme0n1p7  630G  116G  482G  20% /home


This is an excellent illustration of why creating tons of partitions
like it's 1999 can leave you in a difficult spot. You are bound to
make poor guesses as to what actual size you need, which later leads
to situations where some partitions are hardly used while others get
full.


Of course you can also get into this situation if you had everything in 
one filesystem, and ran out of space, and had to split off /home, /var 
etc to save room ...


Richard



Re: Setting up bindfs mount in LXC container

2023-01-17 Thread Richard Hector

On 18/01/23 16:38, Max Nikulin wrote:

On 18/01/2023 03:52, Richard Hector wrote:

On 17/01/23 23:52, Max Nikulin wrote:


lxc.idmap = u 0 10 1000
lxc.idmap = u 1000 1000 1

lxc.mount.entry = /home/richard/sitename/doc_root 
srv/sitename/doc_root none bind,optional,create=dir


My goal is not to map container users to host users, but to allow a 
container user (human user) to access a directory as another container 
user (non-human owner of files). This should also be doable for 
multiple human users for the same site.


Do you mean mapping several users (human and service ones) from a single 
container to the same host UID? The approach I suggested works for 1:1 
mapping. Another technique is group permissions and ACLs, but I would 
not call it straightforward. A user may create a file that belongs to 
the wrong group or is inaccessible to another user.


I'll use more detail :-)

I have a Wordpress site. The directory /srv/sitename/doc_root, and most 
of the directories under it, are owned by user 'sitename'.


PHP runs as 'sitename-run', which has access (via group 'sitename') to 
read all of that, but not write it. Some subdirectories, eg 
.../doc_root/wp-content/uploads, are group-writeable so that it can save 
things there.


An authorised site maintainer, eg me ('richard') (but there may be any 
number of others), needs to be able to write under /srv/sitename, so I 
use bindfs to mount /srv/sitename under /home/richard/sitename, which 
presents it as owned by me, and translates the ownership back to 
'sitename' when I write to it. So each human user sees the site as owned 
by them, but it's all mapped to 'sitename' on the fly.


These users map to host users, I guess, but I'm not particularly 
interested in that ... actually I should care more, because it maps 
to a real but unrelated user id on the host, which could have bad 
implications - but I think that's a separate issue.


I'm not ignoring the rest of your message; I'll look at that separately :-)

Cheers,
Richard



Re: Setting up bindfs mount in LXC container

2023-01-17 Thread Richard Hector

On 17/01/23 23:52, Max Nikulin wrote:

On 17/01/2023 04:06, Richard Hector wrote:


I'm using bindfs in my web LXC containers to allow particular users to 
write to their site docroot as the correct user.


I am not familiar with bindfs, so I may miss something important for 
your use case.


First of all I am unsure why you prefer bindfs instead of mapping some 
container users to host users using namespaces. With the following 
configuration 1000 inside a container and on the host is the same UID:


lxc.idmap = u 0 10 1000
lxc.idmap = u 1000 1000 1
lxc.idmap = u 1001 101001 64535
lxc.idmap = g 0 10 1000
lxc.idmap = g 1000 1000 1
lxc.idmap = g 1001 101001 64535

lxc.mount.entry = /home/richard/sitename/doc_root /srv/sitename/doc_root 
none bind,optional,create=dir


Disclaimer - I haven't actually tried any of your suggestions yet.

My goal is not to map container users to host users, but to allow a 
container user (human user) to access a directory as another container 
user (non-human owner of files). This should also be doable for multiple 
human users for the same site.



In /usr/local/bin/fuse.hook:


I would look into lxcfs hook for inspiration


Interesting; will do. Not sure exactly where to start, but will get there.


In /usr/local/bin/fuse.hook.s2:

lxc-device -n ${LXC_NAME} add /dev/fuse


Is there any reason why it can not be done using lxc.mount.entry in the 
container config?


Is that usable for adding a device file? The only way I found to do that 
is using lxc-device from outside the container. mknod inside doesn't work.
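Untested sketch: a bind mount of the host's /dev/fuse plus a device-cgroup allow rule may remove the need for lxc-device entirely (10:229 is the conventional char device number for /dev/fuse; the cgroup v2 key is shown, the v1 equivalent being lxc.cgroup.devices.allow):

```
lxc.cgroup2.devices.allow = c 10:229 rwm
lxc.mount.entry = /dev/fuse dev/fuse none bind,create=file,optional 0 0
```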



lxc-attach -n ${LXC_NAME} /usr/local/bin/bindfs_mount


I would consider adding a systemd unit inside the container. Unsure if it 
could be done using a udev rule.


That might be better, but it does need to rely on the device existing first.

If I don't use the at job, but run those commands manually after boot, 
it works fine with no error messages.


Unsure if it is relevant, but it is better to run lxc-start and 
lxc-attach as a systemd unit with Delegate=yes configuration, either a 
temporary one (systemd-run) or configured as a service. It ensures 
proper cgroup and scope. Otherwise some cryptic errors may happen.


So even for running stuff manually, run it from systemd? Interesting, 
will investigate further. I wasn't aware of systemd-run.


Thanks,
Richard



Setting up bindfs mount in LXC container

2023-01-16 Thread Richard Hector

Hi all,

I'm using bindfs in my web LXC containers to allow particular users to 
write to their site docroot as the correct user.


Getting this to work has been really hacky, and while it does seem to 
work, I get log messages saying it didn't ...


In /var/lib/lxc//config:

lxc.hook.start-host = /usr/local/bin/fuse.hook


In /usr/local/bin/fuse.hook:

#!/bin/bash
at now + 1 minute <<END 2>>/var/log/lxc/${LXC_NAME}-hook-error.log
/usr/local/bin/fuse.hook.s2
END


In /usr/local/bin/fuse.hook.s2:

lxc-device -n ${LXC_NAME} add /dev/fuse
lxc-attach -n ${LXC_NAME} /usr/local/bin/bindfs_mount


In /usr/local/bin/bindfs_mount (in the container):

#!/bin/bash
file='/usr/local/etc/bindfs_mounts'
# IFS= and -r keep leading whitespace and backslashes in paths intact
while IFS= read -r line; do
  mount "${line}"
done < "${file}"


In /usr/local/etc/bindfs_mounts (in the container):

/home/richard//doc_root


In /etc/fstab (in the container) (single line wrapped by MUA):

/srv//doc_root /home/richard//doc_root fuse.bindfs 
noauto,--force-user=richard,--force-group=richard,--create-for-user=,--create-for-group= 
0 0



I'm sure shell experts (or LXC experts) will tell me this 2-stage 
process is unnecessary, or that there is a better way to do it, but IIRC 
it doesn't work if lxc is waiting for the hook to finish; other stuff 
needs to happen before the device creation works.



At boot, however, I get these messages emailed from the at job (3 lines, 
wrapped by MUA):


lxc-device: : commands.c: lxc_cmd_add_bpf_device_cgroup: 
1185 Message too long - Failed to add new bpf device cgroup rule
lxc-device: : lxccontainer.c: add_remove_device_node: 
4657 set_cgroup_item failed while adding the device node
lxc-device: : tools/lxc_device.c: main: 153 Failed to add 
/dev/fuse to 



The device file is created correctly, and the mounts work.

Oh - and interestingly, this only seems to happen when the host boots. 
If I just reboot (or shutdown and start) the container, it works fine.


It doesn't matter if I increase the delay on the at job.

If I don't use the at job, but run those commands manually after boot, 
it works fine with no error messages.


Any hints?

I suspect my limited understanding of cgroups is contributing to my 
problems ...


Cheers,
Richard



Re: bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

On 12/10/22 00:26, Dan Ritter wrote:

Richard Hector wrote:

Hi all,

I host a few websites, mostly Wordpress.

I prefer to have the site files (mostly) owned by an owner user, and php-fpm
runs as a different user, so that it can't write its own code. For uploads,
those directories are group-writeable.

Then for site developers (who might be contractors to my client) to be able
to update the site, they need read/write access to the docroot, but I don't
want them all logging in using the same account/credentials.

So I've set up bindfs ( https://bindfs.org/ ) with the following fstab line
(example at this stage):

/srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs 
--force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home
0 0

That means they can see their own 'view' of the docroot under their own home
directory, and they can create files as needed, which will have the correct
owner under /srv. I haven't yet looked at what happens with the uploaded and
cached files which are owned by the php user; hopefully that works ok.

This means I don't need to worry about sudo and similar things, or
chown/chgrp - which in turn means I should be able to offer sftp as an
alternative to full ssh logins. It can probably even be chrooted.

Does that sound like a sane plan? Are there gotchas I haven't spotted?


That's a solution which has worked in similar situations in the
past, but it runs into problems with accountability and
debugging.

The better solution is to use a versioning system -- git is the
default these days, subversion will certainly work -- and
require your site developers to make their changes to the
version controlled repository. The repo is either automatically
(cron, usually) or manually (dev sends an email or a ticket)
updated on the web host.


I agree that a git-based deployment scheme would be good. However, I 
understand that it's considered bad practice for the docroot to itself 
be a git repo, which means writing scripts to check out the right 
version and then deploy it (which might also help with setting the right 
permissions).


I'm also not entirely comfortable with either a cron or ticket-based 
trigger - I'd want to look into either git hooks (but that's on the 
wrong machine), or maybe a webapp with a deploy button.


And then there's the issue of what is in git and what isn't, and how to 
customise the installation after checkout - eg setting the site name/url 
to distinguish it from the dev/staging site or whatever, setting db 
passwords etc. More stuff for the deployment script to do, I guess.
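The checkout-then-deploy script mentioned above could be sketched roughly like this (all paths, the 'deploy' helper name, and the symlink-flip strategy are assumptions for illustration, not an established scheme):

```shell
#!/bin/bash
# hypothetical deploy step: materialise a tagged release next to the
# live docroot, then atomically flip a symlink to point at it
deploy() {
  repo=$1 releases=$2 docroot=$3 tag=$4
  dest="$releases/$tag"
  # a git worktree gives a clean checkout of the tag without a separate clone
  git -C "$repo" worktree add --force "$dest" "$tag"
  ln -sfn "$dest" "$docroot"
}
```

Ownership fixes (chown -R to the site owner) and config templating would slot in between the checkout and the symlink flip.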


So I like this idea, but it's a lot more work. And I have to convince my 
clients and/or their devs to use it, which might require learning git. 
And I'm not necessarily good enough at git myself to do that teaching well.



- devs don't get accounts on the web host at all


They might need it anyway, for running wp cli commands etc (especially 
given the privilege separation which means that installing plugins via 
the WP admin pages won't work - or would you include the plugins in the 
git repo?)



- you can resolve the conflicts of two people working on the
   same site

True.


- automatic backups, assuming you have a repo not on this server


I have backups of the web server; backups of the repo as well would be good.


- easy revert to a previous version


True.


- easy deployment to multiple servers for load balancing

True, though I'm not at that level at this point.


Drawbacks:

- devs have to have a local webserver to test their changes

Yes, or a dev server/site provided by me


- devs have to follow the process

And have to know how, yes


- someone has to resolve conflicts or decide what the deployed
   version is

True anyway

Note that this method doesn't stop the dev(s) using git anyway.

In summary, I think I want to offer a git-based method, but I think it 
would work ok in combination with this, which is initially simpler.


It sounds like there's nothing fundamentally broken about it, at least :-)

Cheers,
Richard



Re: bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

On 11/10/22 22:40, hede wrote:

On 11.10.2022 10:03 Richard Hector wrote:

[...]
Then for site developers (who might be contractors to my client) to be
able to update the site, they need read/write access to the docroot,
but I don't want them all logging in using the same
account/credentials.
[...]
Does that sound like a sane plan? Are there gotchas I haven't spotted?


I think I'm not able to assess the bind-mount question, but...
Isn't that a use case for ACLs? (incl. default ACLs for the webservers 
user here?)


Yes, probably. However, I looked at ACLs earlier (months ago at least), 
and they did my head in ...


Files will then still be owned by the user who created them. But your 
default-user has all  (predefined) rights on them.


Having them owned by the user that created them is good for 
accountability, but bad for glancing at ls output to see if everything 
looks right.


I'd probably prefer that because - by instinct - I have a bad feeling 
regarding security if one user can slip/foist(?) a file to be "created" 
by some other user. But that's only a feeling without knowing all the 
circumstances.


They can only have it owned by one specific user, but I acknowledge 
possible issues there.


And this way it's always clear which users have access by looking at the 
ACLs while elsewhere defined bind mount commands are (maybe) less 
transparent. And you always know who created them, if something goes 
wrong, for example.


Nothing is clear to me when I look at ACLs :-) I do have the output of 
'last' (for a while) to see who is likely to have created them.


On the other hand, if you know of a good resource for better 
understanding ACLs, preferably with examples that are similar to my use 
case, I'd love to see it :-)


?) I'm not native English and slip or foist are maybe the wrong terms / 
wrongly translated. The context is that one user creates files and the 
system marks them as "created by" some other user.


Seem fine to me :-) But they're owned by the other user; I wouldn't 
assume that that user created them. Especially when that user isn't 
directly a person.


Thanks,
Richard



bindfs for web docroot - is this sane?

2022-10-11 Thread Richard Hector

Hi all,

I host a few websites, mostly Wordpress.

I prefer to have the site files (mostly) owned by an owner user, and 
php-fpm runs as a different user, so that it can't write its own code. 
For uploads, those directories are group-writeable.


Then for site developers (who might be contractors to my client) to be 
able to update the site, they need read/write access to the docroot, but 
I don't want them all logging in using the same account/credentials.


So I've set up bindfs ( https://bindfs.org/ ) with the following fstab 
line (example at this stage):


/srv/wptest-home/doc_root /home/richard/wptest-home/doc_root fuse.bindfs 
--force-user=richard,--force-group=richard,--create-for-user=wptest-home,--create-for-group=wptest-home 
0 0


That means they can see their own 'view' of the docroot under their own 
home directory, and they can create files as needed, which will have the 
correct owner under /srv. I haven't yet looked at what happens with the 
uploaded and cached files which are owned by the php user; hopefully 
that works ok.


This means I don't need to worry about sudo and similar things, or 
chown/chgrp - which in turn means I should be able to offer sftp as an 
alternative to full ssh logins. It can probably even be chrooted.


Does that sound like a sane plan? Are there gotchas I haven't spotted?

Cheers,
Richard



Re: nginx.conf woes

2022-10-10 Thread Richard Hector

On 3/10/22 02:07, Patrick Kirk wrote:

Hi all,

I have 2 sites to run from one server.  Both are based on ASP.Net Core.  
Both have SSL certs from letsencrypt.  One works perfectly.  The other 
sort of works.


Firstly, I notice that cleardragon.com and kirks.net resolve to 
different addresses, though maybe cloudflare forwards kirks.net to the 
same place. But the setups are different.


Or maybe you're using a different dns or other system to reach your 
pre-production system.


If I go to http://localhost:5100 it redirects to 
https://localhost:5101 and then it warns of an invalid certificate.


I'm a bit unclear on this - I guess these are both the upstreams? The 
upstream (ASP thing?) also redirects http to https?


Is nginx supposed to handle its upstream redirecting it to https?

Anyway, the invalid cert is expected, because you presumably don't have 
a cert for 'localhost'.
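For what it's worth, the usual arrangement is to let nginx terminate 
TLS and talk plain HTTP to the upstream, so the upstream never needs 
its own https redirect. A hedged sketch in nginx syntax - the 
certificate paths are the certbot defaults and may not match your 
layout, and the port is the one mentioned in the thread:

```
server {
    listen 443 ssl;
    server_name cleardragon.com;

    # assumed certbot default paths - adjust to your setup
    ssl_certificate     /etc/letsencrypt/live/cleardragon.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/cleardragon.com/privkey.pem;

    location / {
        # plain http to the ASP.NET upstream
        proxy_pass http://127.0.0.1:5100;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```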


If 
I try lynx http://cleardragon.com a similar redirect takes place and I 
get an "Alert!: Unable to connect to remote host" error and lynx closes down.


I see a website, but then again, maybe I'm looking at the production 
site and you're not.


This redirect is presumably the one in your nginx config, rather than 
the one done by the upstream.


Does connecting explicitly to https://cleardragon.com also fail?



When I do sudo tail -f /var/log/nginx/error.log I see: 2022/10/02 
12:44:22 [notice] 1624399#1624399: signal process started


I don't know about this - lots of people report it, but I don't see 
answers. But it's a notice rather than an error.


Cheers,
Richard



Re: Thoughts on logcheck?

2022-07-30 Thread Richard Hector

On 30/07/22 10:20, Andy Smith wrote:

Hello,

On Fri, Jul 29, 2022 at 04:30:19PM +1200, Richard Hector wrote:

My thought is to configure rsyslog to create extra logfiles, equivalent to
syslog and auth.log (the two files that logcheck monitors by default), which
only log messages at priority 'warning' or above, and configure logcheck to
monitor those instead. This should cut down the amount of filter maintenance
considerably.

Does this sound like a reasonable idea?


Personally I wouldn't (and don't) do it. It sounds like a bunch of
work only to end up with things that get logged anyway (as you
noted) plus the risk of missing other interesting things.


I started by enabling the extra logs on one system. I found I saw _more_ 
interesting things, because they weren't hidden by mountains of other 
stuff. That's in the boot-time kernel messages, btw. I only got 14 lines 
(total, not filtered by logcheck) when I was only showing warning or 
higher, rather than the screeds I normally see. I never had time to go 
through all those, even to read and understand them, let alone write 
filters - or to decide what was important, what wasn't, and whether the 
same messages with different values would be.


I think this will be useful to me, and the work isn't much because it's 
the same for every system (or at least every system that runs logcheck), 
which I can push out with ansible, where the filters have to be much 
more system- (or service-)specific.


The full logs are of course still there if I need to go back and look 
for something.



I don't find writing logcheck filters to be a particularly big time
sink. But if you do then it might alter the balance for you.


Thanks for your input :-)

Richard



Thoughts on logcheck?

2022-07-28 Thread Richard Hector

Hi all,

I've used logcheck for ages, to email me about potential problems from 
my log files.


I end up spending a lot of time scanning the emails, and then 
occasionally a bunch of time updating the filter rules to stop most of 
those messages coming through.


My thought is to configure rsyslog to create extra logfiles, equivalent 
to syslog and auth.log (the two files that logcheck monitors by 
default), which only log messages at priority 'warning' or above, and 
configure logcheck to monitor those instead. This should cut down the 
amount of filter maintenance considerably.
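A sketch of the rsyslog side of this, using the traditional selector 
syntax - the file names are my own invention, and auth/authpriv are 
split out to mirror the syslog/auth.log pair:

```
# /etc/rsyslog.d/05-warn.conf (hypothetical file name)
# everything at warning or above, minus auth, like syslog:
*.warning;auth,authpriv.none    -/var/log/syslog.warn
# auth/authpriv at warning or above, like auth.log:
auth,authpriv.warning           /var/log/auth.warn
```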


Does this sound like a reasonable idea?

A quick test does show that I'll still get messages I can't do much 
about - eg I telnetted to the ssh port and closed the connection, and my 
logfile reported that interaction as an error. That kind of thing should 
still be easily filtered, though.
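As a sketch of the kind of ignore rule that would filter it - the 
exact sshd wording varies between OpenSSH versions, so treat the regex 
as an example rather than a drop-in rule:

```shell
# Hedged example: an extended regex of the sort logcheck ignore files
# use, matched here with grep -E against a sample syslog line
pattern='^[A-Z][a-z]{2} [ :0-9]{11} [._[:alnum:]-]+ sshd\[[0-9]+\]: Connection closed by [0-9a-fA-F.:]+ port [0-9]+( \[preauth\])?$'
echo 'Jul 29 16:30:19 myhost sshd[1234]: Connection closed by 192.0.2.7 port 51514 [preauth]' \
    | grep -E "$pattern"
```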


I think I'd want to create a completely fresh set of filters, rather 
than using the supplied defaults, but I'm not sure about that yet.


Cheers,
Richard



Re: Synaptic missing in "Bookworm"

2022-06-30 Thread Richard Hector

On 1/07/22 12:08, Peter Hillier-Brook wrote:

anyone with thoughts, or info about Synaptic missing in "Bookworm"?



https://tracker.debian.org/pkg/synaptic

Richard



Re: regarding firewall discussion

2022-06-03 Thread Richard Hector

On 2/06/22 05:26, Joe wrote:

On Tue, 31 May 2022 03:17:52 +0100
mick crane  wrote:


regarding firewall discussion I'm uncertain how firewalls are
supposed to work.
I think the idea is that nothing is accepted unless it is in response
to a request.
What's to stop some spurious instructions being sent in response to
genuine request?


Nothing really, but the reply can only come from the site you made the
request to.


A source IP address can be faked.

Richard



Re: grep: show matching line from pattern file

2022-06-03 Thread Richard Hector

On 3/06/22 07:17, Greg Wooledge wrote:

On Thu, Jun 02, 2022 at 03:12:23PM -0400, duh wrote:


> > Jim Popovitch wrote on 28/05/2022 21:40:
> > > I have a file of regex patterns and I use grep like so:
> > > 
> > >  ~$ grep -f patterns.txt /var/log/syslog
> > > 
> > > What I'd like to get is a listing of all lines, specifically the line

> > > numbers of the regexps in patterns.txt, that match entries in
> > > /var/log/syslog.   Is there a way to do this?



$ cat -n /var/log/syslog | grep warn

and it found "warn" in the syslog file and provided line numbers. I have
not used the -f option


You're getting the line numbers from the log file.  The OP wanted the line
numbers of the patterns in the -f pattern file.

Why?  I have no idea.  There is no standard option to do this, because
it's not a common requirement.  That's why I wrote one from scratch
in perl.



I don't know what the OP's use case is, but here's an example I might use:

I have a bunch of custom ignore files for logcheck. After a software 
upgrade, I might want to check which patterns no longer match anything, 
and can be deleted or modified.


I'd really still want to check with real egrep, though, rather than 
using perl's re engine instead.
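That check can be done with the real grep. A minimal sketch, assuming 
one extended regex per line in the pattern file (file names in the 
usage comment are examples):

```shell
#!/bin/sh
# Report the line number of every pattern that still matches the log,
# using grep -E itself rather than another regex engine.
match_report() {    # usage: match_report PATTERN_FILE LOG_FILE
    n=0
    while IFS= read -r pat; do
        n=$((n + 1))
        # -q: exit status only; -- protects patterns starting with '-'
        grep -Eq -- "$pat" "$2" && printf '%d: %s\n' "$n" "$pat"
    done < "$1"
}

# e.g.: match_report /etc/logcheck/ignore.d.server/local-myapp /var/log/syslog
```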


Cheers,
Richard



Re: Permanent email address?

2022-05-16 Thread Richard Hector

On 16/05/22 05:11, Dan Ritter wrote:

I note that nobody owns rhkramer.org:

$ host rhkramer.org
Host rhkramer.org not found: 3(NXDOMAIN)

NXDOMAIN means no such domain.


Not quite. It doesn't mean no-one owns it; it just means (IIRC) there's 
no A or AAAA record for that domain. www.rhkramer.org could exist, for 
example.


Instead, check with:

$ whois rhkramer.org
NOT FOUND

[snip legalese]

... which shows that it is indeed available.

Richard



Re: wtf just happened to my local staging web server

2022-05-11 Thread Richard Hector

On 5/05/22 19:57, Stephan Seitz wrote:

Am Do, Mai 05, 2022 at 09:30:42 +0200 schrieb Klaus Singvogel:

I think there are more.


Yes, I only know wtf as ...


Yes, but such language is not permitted on this list.

Richard



Re: stretch with bullseye kernel?

2022-05-04 Thread Richard Hector

On 4/05/22 18:57, Tixy wrote:

On Wed, 2022-05-04 at 00:44 +0300, IL Ka wrote:

Linux kernel is backward compatible. Linus calls it "we do not break
userspace".
That means _old_  applications should work on new kernel


There's also the issue of what config options the kernel is built with.
I'm sure there's been at least one time in the past where for a new
Debian release they've had to enable a kernel feature that the new
systemd (or udev?) wanted. But again, a case like that would stop a new
Debian working on and old kernel, not the other way around as the OP is
intending. I don't expect the Debian kernel maintainers would _remove_
kernel config options needed in a prior release.



Thanks all - I thought it would probably be safe, and indeed everything 
seems to be working :-)


Cheers,
Richard



stretch with bullseye kernel?

2022-05-03 Thread Richard Hector

Hi all,

For various reasons, I have some stretch LXC containers, on a buster 
host that I now need to upgrade. That will mean they end up running on 
bullseye's 5.10 kernel.


Is that likely to be a problem?

If so, I guess I can leave the host on buster's kernel for the time 
being, but that's obviously not ideal.


Hopefully the stretch containers can/will be either upgraded or 
dispensed with soon ...


Cheers,
Richard



Re: Libreoffice: printing "dirties" the file being printed

2022-04-11 Thread Richard Hector

On 9/04/22 00:17, gene heskett wrote:

IMO its up to the pdf
interpretor to make the pdf its handed fit the printer. Period, IMO it is
not open for discussion.


"Make it fit" might include scaling. You don't necessarily want that 
happening automatically - what if you're printing something like a 
circuit board design (not likely from LO, I admit), to be transferred 
directly onto the pcb?


Cheers,
Richard



Re: libvirt tools and keyfiles

2022-04-02 Thread Richard Hector




On 2022-04-01, Celejar  wrote:



What is going on here? Since I'm specifying a keyfile on the command
line, and it's being used - otherwise I wouldn't even get the list of
VMs - why am I being prompted for the password?

Celejar


Apologies for replying to the wrong message - I've deleted the original.

Are you really getting prompted for the password for the host system? 
You're not talking about the login prompt on the console of the VM?


Also, by adding my normal user on the host system to the libvirt group, 
it's not necessary to ssh as root - I can just use my normal user. In 
fact I don't allow root logins, so I can't directly test your commands.


Oh, and I assume the doubled '-c' is a typo :-)

Cheers,
Richard



Re: OT EU-based Cloud Service

2022-03-18 Thread Richard Hector

On 18/03/22 21:14, Byung-Hee HWANG wrote:

https://hetzner.cloud

German company, a single VPS cost is about 5€ per month.

Oh Nuremberg! Racing Circuit, fantastic!!


Um - you might be thinking of Nürburg? Home of the Nürburgring? :-)

Nuremberg has other associations in my mind, but I'm sure it's a fine 
place for a data centre.


Cheers,
Richard



Re: cups/avahi-daemon - worrying logs

2022-03-17 Thread Richard Hector

On 17/03/22 19:37, mick crane wrote:

On 2022-03-17 05:09, Richard Hector wrote:

On 8/03/22 13:25, Richard Hector wrote:

Hi all,

I've recently set up a small box to run cups, to provide network 
access to a USB-only printer. It's a 32-bit machine running bullseye.


I'm seeing log messages like these:

Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipps._tcp.local#011IN#011SRV 0 
0 631 whio.local ; ttl=120] not fitting in legacy unicast packet, 
dropping.
Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[whio.local#011IN#011AAAA fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipp._tcp.local#011IN#011SRV 0 0 
631 whio.local ; ttl=120] not fitting in legacy unicast packet, 
dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[whio.local#011IN#011AAAA fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.


Those link-local IPv6 addresses belong to the machine itself. It 
currently has no other IPv6 address(es) (other than loopback), but I 
should probably set that up.


Any hints as to what's going on?

Most of the hits I get from a web search are full of 'me too' with no 
answers.


Nobody? Not even another 'me too'? :-)

Any suggestions for further/better questions to ask, or info to provide?


I have no idea. Could it be something to do with this old report ?
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=517683


I don't think so - that seems to relate to too many packets, rather than 
packets being too big.



I would probably fiddle about to get rid of the IPV6 thing.


I don't want to get rid of IPv6 - I use it. The problem could be 
related to IPv6, though, since IPv6 (IIRC) doesn't allow packets to be 
fragmented in transit.


Cheers,
Richard



Re: cups/avahi-daemon - worrying logs

2022-03-16 Thread Richard Hector

On 8/03/22 13:25, Richard Hector wrote:

Hi all,

I've recently set up a small box to run cups, to provide network access 
to a USB-only printer. It's a 32-bit machine running bullseye.


I'm seeing log messages like these:

Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipps._tcp.local#011IN#011SRV 0 0 
631 whio.local ; ttl=120] not fitting in legacy unicast packet, dropping.
Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[whio.local#011IN#011AAAA fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipp._tcp.local#011IN#011SRV 0 0 
631 whio.local ; ttl=120] not fitting in legacy unicast packet, dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[whio.local#011IN#011AAAA fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not 
fitting in legacy unicast packet, dropping.


Those link-local IPv6 addresses belong to the machine itself. It 
currently has no other IPv6 address(es) (other than loopback), but I 
should probably set that up.


Any hints as to what's going on?

Most of the hits I get from a web search are full of 'me too' with no 
answers.


Nobody? Not even another 'me too'? :-)

Any suggestions for further/better questions to ask, or info to provide?

Thanks,
Richard



Re: voltage monitoring Q

2022-03-15 Thread Richard Hector

On 13/03/22 21:15, gene heskett wrote:

they are the last seacrate drives I'll own... Ever.


Lots of brands seem to go through bad patches. Even just bad batches.

For stuff I care about, I use RAID1 (mdraid), on NAS drives, from mixed 
manufacturers. So I'll have a pair consisting of a Seagate IronWolf and 
a WD Red. That way, if one dies, it's less likely that the other will 
immediately follow suit.


Except if I'm buying a 'server' from the local shop - they insist that 
mixing drives will be less reliable, but don't give reasons or evidence.
I insisted for my home backup server, with a Supermicro server board, 
but they put 'Custom Workstation' on the invoice - I don't think they 
were prepared to call it a server.


Richard



Re: Launch a minimal MATE DE

2022-03-11 Thread Richard Hector

On 9/03/22 04:06, David Wright wrote:

On Tue 08 Mar 2022 at 07:00:08 (+0100), to...@tuxteam.de wrote:

On Tue, Mar 08, 2022 at 01:54:11PM +1300, Richard Hector wrote:

[...]

> Just to solve the infinite recursion problem:
> 
> richard@zircon:~$ apt-file search bin/apt-file

> apt-file: /usr/bin/apt-file
> 
> so install the apt-file package :-)


Oh, a recursive fishing rod :-)


Sorry to disappoint, but I think we're dealing with
repetition here, rather than recursion:


Perhaps both.

Cheers,
Richard



Re: Launch a minimal MATE DE

2022-03-07 Thread Richard Hector

On 6/03/22 22:20, to...@tuxteam.de wrote:

On Sun, Mar 06, 2022 at 09:34:36AM +0100, Christian Britz wrote:



On 2022-03-06 09:30 UTC+0100, Richard Owlett wrote:

>> apt-get --no-install-recommends install mate-desktop-environment
> When I attempted to run startx I received the message
>> startx: command not found

Hi Richard,

I can't tell you anything about the dependencies but you could try to
install xinit package. This contains the startx command.


(Thanks, Christian, for the fish. Now I'll try to sell the rod ;-)

Reminder (paste this on a sticky note on your workshop wall :)

   apt-file search startx


Just to solve the infinite recursion problem:

richard@zircon:~$ apt-file search bin/apt-file
apt-file: /usr/bin/apt-file

so install the apt-file package :-)

Cheers,
Richard



cups/avahi-daemon - worrying logs

2022-03-07 Thread Richard Hector

Hi all,

I've recently set up a small box to run cups, to provide network access 
to a USB-only printer. It's a 32-bit machine running bullseye.


I'm seeing log messages like these:


Mar  7 15:47:47 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipps._tcp.local#011IN#011SRV 0 0 631 
whio.local ; ttl=120] not fitting in legacy unicast packet, dropping.
Mar  7 15:47:47 whio avahi-daemon[310]: Record [whio.local#011IN#011AAAA 
fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not fitting in legacy unicast packet, 
dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record 
[Brother\032HL-2140\032\064\032whio._ipp._tcp.local#011IN#011SRV 0 0 631 
whio.local ; ttl=120] not fitting in legacy unicast packet, dropping.
Mar  7 15:47:48 whio avahi-daemon[310]: Record [whio.local#011IN#011AAAA 
fe80::3e4a:92ff:fed3:9e16 ; ttl=120] not fitting in legacy unicast packet, 
dropping.


Those link-local IPv6 addresses belong to the machine itself. It 
currently has no other IPv6 address(es) (other than loopback), but I 
should probably set that up.


Any hints as to what's going on?

Most of the hits I get from a web search are full of 'me too' with no 
answers.


Cheers,
Richard



Re: systemd user@###.service failure causing 90 sec delays during boot, login

2022-03-01 Thread Richard Hector

On 1/03/22 12:05, Greg Wooledge wrote:

On Mon, Feb 28, 2022 at 10:28:49PM +, KCB Leigh wrote:

This operating system has worked excellently for months, but for the last 2 
days has suddenly been taking a very long time to boot.  The cause of the delay 
can be seen from the syslog:


Obvious question 1: what changed 2 days ago?


Apologies for replying to the wrong message; I've already deleted the 
older ones.


This reminded me of a problem I had a couple of months ago, where it 
took a long time to log in. I notice also you'd just installed ufw for 
firewalling.


My problem turned out to be that starting the user@xxx.service requires 
a network connection on the loopback interface - I was experimenting 
with nftables and had neglected to allow that.
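For reference, a minimal nftables sketch of the rule I'd neglected - 
with an input chain whose policy is drop, you need an explicit 
loopback accept (the table/chain names are just the common convention):

```
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;
        iif "lo" accept                      # the rule I'd forgotten
        ct state established,related accept
    }
}
```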


Check your firewall for loopback connections?

Cheers,
Richard



Re: Wrong libvirt version in bullseye installation

2022-02-07 Thread Richard Hector

On 8/02/22 11:34, Gary L. Roach wrote:

I have been trying to get a clean copy of qemu/kvm installed but when I 
I have been trying to get a cleen copy of qemu/kvm installed but when I 
try to install qemu-system I get:


     libvirt-clients : Depends: libvirt0 (= 7.0.0-3) but 8.0.0-1~bpo11+1 
is to be installed.


  The same for libvirt-daemon and some others. Version 7.0.0-3 is listed 
as the version used in Bullseye. The 8.0.0-1 version is used in the 
testing version. I have gone through my sources.list files and can not 
find the reason libvirt is trying to be loaded from the testing 
repository. Could anyone help? I have tried clearing the /var/cache 
files. That didn't help.


The bpo bit in the version suggests that it's coming from the backports 
repo (bullseye-backports), not testing as such. I assume you have the 
backports repo configured. Why you're getting that particular backport 
if you haven't installed it manually, I'm not sure - perhaps you 
installed or are installing something else that depends on it? If 
backports is what you want, it appears libvirt-clients is available from 
there as well, so you can install a matching version.


Cheers,
Richard



Re: Security

2022-02-01 Thread Richard Hector

On 2/02/22 00:26, Vincent Lefevre wrote:

On 2022-01-31 01:36:06 +1300, Richard Hector wrote:

On 29/01/22 04:17, Vincent Lefevre wrote:
> Servers shouldn't have pkexec installed in the first place, anyway.

libvirt-daemon-system depends on policykit-1.

Should that not be on my (kvm) server either?


I don't need libvirt-daemon-system on my server. And I don't see
why it would be needed in general. If I understand correctly,
libvirt is used to manage VMs, but what is mostly exposed on the
Internet (e.g. as a web server) is the VM itself, which doesn't
need libvirt.


I guess it depends how you define a 'server'. I include the machine that 
hosts my VMs. And I certainly don't restrict it to what's exposed on the 
Internet.


I admit I haven't explored in depth exactly which bits of libvirt are 
required on the VM host; I rely to some extent on the recommendations in 
the packages.


Cheers,
Richard



Re: Security

2022-01-30 Thread Richard Hector

On 29/01/22 04:17, Vincent Lefevre wrote:


Servers shouldn't have pkexec installed in the first place, anyway.



libvirt-daemon-system depends on policykit-1.

Should that not be on my (kvm) server either?

Cheers,
Richard



Re: cooperative.co.uk has address 127.0.0.1

2022-01-19 Thread Richard Hector

On 19/01/22 04:08, Andrew M.A. Cater wrote:


So - the Cooperative Society - is at https://www.coop.co.uk


Oddly, when I searched for "Co-operative Group Limited" (which I got 
from whois), I found a different site: https://co-operative.coop


It seems to be the same people, but a totally independent site.


It's quite possible that the 127.0.0.1 is genuine to prevent someone else using 
it and that their webserver and DNS provision will redirect cooperative.co.uk
to coop.co.uk in due course.


AFAIK a domain doesn't have to have an A record at all. Or any records 
for that matter; having it registered should be enough to save it.


Why not just redirect it now? Maybe they can't decide which of the above 
sites is the right one :-)


Cheers,
Richard



Re: Single broken package blocks whole package management

2022-01-05 Thread Richard Hector

On 6/01/22 02:32, Urs Thuermann wrote:

After a dist-upgrade from Raspbian 8 (jessie) to 9.13 (stretch)
hundreds of packages still need to be upgraded and aptitude reports
numerous conflicts.


Firstly, the standard response is that Raspbian is not Debian :-) There 
are differences which might be related to your problem.



I first wanted to upgrade everything which doesn't cause any
conflicts, which fails because of problems in wolfram-engine:


wolfram-engine appears not to be a Debian package, for starters.

[snip]


 Unescaped left brace in regex is deprecated, passed through in regex; marked by 
<-- HERE in m/^(.*?)(\\)?\${ <-- HERE ([^{}]+)}(.*)$/ at 
/usr/share/perl5/Debconf/Question.pm line 72.
 Unescaped left brace in regex is deprecated, passed through in regex; marked by 
<-- HERE in m/\${ <-- HERE ([^}]+)}/ at /usr/share/perl5/Debconf/Config.pm line 
30.


Looks bad, but deprecation isn't an error.

[snip]


 dpkg: unrecoverable fatal error, aborting:
  files list file for package 'wolfram-engine' contains empty filename
 E: Sub-process /usr/bin/dpkg returned an error code (2)
 Failed to perform requested operation on package.  Trying to recover:


Ah. So perhaps a bug in that package. Or it's corrupted.

If you look at /var/lib/dpkg/info/wolfram-engine.list does it have an 
empty line in it? What happens if you edit that out?
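If hand-editing feels risky, the same thing can be scripted - a hedged 
sketch that backs the file up first (the path in the usage comment is 
the one from your error message; run as root):

```shell
#!/bin/sh
# Back up a dpkg file list, then delete the empty lines dpkg chokes on.
strip_empty_lines() {    # usage: strip_empty_lines /var/lib/dpkg/info/PKG.list
    cp "$1" "$1.bak"     # keep a copy before touching dpkg state
    sed -i '/^$/d' "$1"  # GNU sed: delete empty lines in place
}

# strip_empty_lines /var/lib/dpkg/info/wolfram-engine.list
```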




 root@uranus:~#

I tried to remove packages wolfram-engine and wolframscript, also
tried to remove debconf-utils, but everything fails with the same
error message:

 root@uranus:~# dpkg --force-all -P wolfram-engine
 dpkg: unrecoverable fatal error, aborting:
  files list file for package 'wolfram-engine' contains empty filename


Same thing



Also, upgrading a single package that's completely unrelated, is not
possible:

 root@uranus:~# aptitude install acl
 The following packages will be upgraded:
   acl libacl1
 The following partially installed packages will be configured:
   debconf-utils{b}
 2 packages upgraded, 0 newly installed, 0 to remove and 1005 not upgraded.
 Need to get 0 B/80.7 kB of archives. After unpacking 49.2 kB will be used.
 The following packages have unmet dependencies:
  debconf-utils : Depends: debconf (= 1.5.61) but 1.5.56+deb8u1 is 
installed and it is kept back.
 The following actions will resolve these dependencies:

  Remove the following packages:
 1) debconf-utils



 Accept this solution? [Y/n/q/?]
 The following packages will be REMOVED:
   debconf-utils{a}
 The following packages will be upgraded:
   acl libacl1
 2 packages upgraded, 0 newly installed, 1 to remove and 1005 not upgraded.
 Need to get 0 B/80.7 kB of archives. After unpacking 58.4 kB will be freed.
 Do you want to continue? [Y/n/?]
 Reading changelogs... Done
 Unescaped left brace in regex is deprecated, passed through in regex; marked by 
<-- HERE in m/^(.*?)(\\)?\${ <-- HERE ([^{}]+)}(.*)$/ at 
/usr/share/perl5/Debconf/Question.pm line 72.
 Unescaped left brace in regex is deprecated, passed through in regex; marked by 
<-- HERE in m/\${ <-- HERE ([^}]+)}/ at /usr/share/perl5/Debconf/Config.pm line 
30.
 dpkg: unrecoverable fatal error, aborting:
  files list file for package 'wolfram-engine' contains empty filename
 E: Sub-process /usr/bin/dpkg returned an error code (2)
 Failed to perform requested operation on package.  Trying to recover:


Still comes back to the same thing.


 root@uranus:~#
 
What else can I do to get the package management working again?


I'd try deleting any blank lines from that file, and trying again.

Or maybe apt-get install --reinstall wolfram-engine

Or ask on a Raspbian list :-)

A related question here:

https://unix.stackexchange.com/questions/425355/x11-common-contains-empty-filename

Cheers,
Richard



Re: Thunderbird not allowing local accounts

2022-01-05 Thread Richard Hector

On 6/01/22 02:35, Paul M. Foster wrote:

Folks:

I just restarted my machine, and am using Thunderbird 91.4.1 (the 
latest) 64 bit on Debian 11. I didn't reinstall Thunderbird or upgrade 
it. Before I restarted the machine, I had a Thunderbird email account 
for local emails, which grabbed email from my /var/mail/paulf folder. 
Now that account doesn't show up in Thunderbird, and I'm unable to 
create an account like that (one which grabs mail from a local folder). 
The dialogs which used to be there allowing you to create a localhost 
mbox account are gone. I've verified the (complicated) procedure for 
doing this on the Internet, and the dialogs shown are no longer in 
Thunderbird. I am unable to create a localhost email account in 
Thunderbird.


Any help? Did Thunderbird make some change I don't know about?

Paul



Looks like it :-(

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=993526
https://bugzilla.mozilla.org/show_bug.cgi?id=1625741

Richard



Re: reportbug fail

2021-11-21 Thread Richard Hector

On 21/11/21 3:04 am, Lee wrote:

I wanted to create a bug report for meld but couldn't find any info on
how to other than "use reportbug" :(


I see your problem is solved, but for future reference, this page has 
info on reporting bugs via email:


https://www.debian.org/Bugs/Reporting

Cheers,
Richard



Re: question from total newbie. a little help please

2021-10-28 Thread Richard Hector

On 18/10/21 2:55 am, john doe wrote:

With W10 you have also the possibility of using 'WLS'; another
alternative would be to install Debian as a VM.


I think perhaps you mean WSL - Windows Subsystem for Linux?

https://docs.microsoft.com/en-us/windows/wsl/install

I've never used it myself.

Richard



Re: [Sid] Firefox problem

2021-10-28 Thread Richard Hector

On 17/10/21 9:55 pm, Grzesiek wrote:

Hi there,

On some of the machines I use, after opening Firefox I get an empty 
browser window (with menus, decorations etc) but nothing else is 
displayed. It's impossible to open a menu, type an address, etc. The 
only thing you can do is 
to close the window. After changing display configuration (rotate to 
portrait, adding external monitor..) it starts to work as expected. You 
do not even need to reopen it. Moreover, it looks like Firefox was running 
ok all the time but nothing was displayed.
After recent updates on some machines I get the same problem using 
firefox-esr.

The only error mesg I get is:
###!!! [Parent][RunMessage] Error: Channel closing: too late to 
send/recv, messages will be lost


Are you seeing that message in the shell that you started it from? If 
not, and if you're not running it in a shell, try that to see if there 
are more messages?


Cheers,
Richard



Re: replacement of sqsh for debian 11

2021-10-28 Thread Richard Hector

On 28/10/21 3:05 pm, Greg Wooledge wrote:

Nobody could figure out that you were trying to connect
to an existing proprietary database.


Well, I did. Because that's what sqsh is for - it's a client, not a DBMS.

But I guess it could have been clearer.

Cheers,
Richard



Re: buggy N-M (was: Debian 11: Unable to detect wireless interface on an old laptop) computer

2021-09-28 Thread Richard Hector
This isn't really a good place to chip in, but the best I can find from 
the messages I haven't deleted ...


On 29/09/21 2:00 am, Henning Follmann wrote:

My comment to the OP was basically on the nebulous source (most VPN Providers)
and the generalized categorization (N-M is buggy), which I disagree with.


My own problems with NM, which may be related, seem to be shared with 
many in the 'OpenVPN community'.


It seems that for configuring OpenVPN, NM does its own thing, and mostly 
ignores the 'standard' configuration files etc. that are covered by the 
OpenVPN documentation. Even some of the terminology seems to be different.


That makes it very difficult to match up the changes I might make on my 
server to those that are needed on the (NM-managed) client. I had it 
working on my buster laptop, but with a reinstall of bullseye, combined 
with some changed defaults in the OpenVPN setup, I ended up giving up - 
I now start my VPN with systemctl rather than clicking through NM. I 
should probably write a script rather than trying to remember how vpns 
map to systemctl unit targets (?), but I don't use it very often, and 
have moved on ...
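The script would be short - a sketch assuming Debian's openvpn 
package, which ships an openvpn-client@.service template unit (the 
name 'myvpn' is an example, matching /etc/openvpn/client/myvpn.conf):

```shell
#!/bin/sh
# Map a VPN config name to its systemd unit, so I don't have to
# remember the template syntax.
vpn_unit() {    # usage: vpn_unit NAME
    printf 'openvpn-client@%s.service' "$1"
}

# then e.g.: systemctl start "$(vpn_unit myvpn)"
```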


I feel the NM maintainers could do more to talk to the OpenVPN folks, 
and maybe provide some tools to import standard configs, or generate 
something that NM can consume? Maybe vice-versa too. I don't know where 
those responsibilities lie.


Cheers,
Richard



Re: silence audio on locked screen?

2021-09-28 Thread Richard Hector

On 28/09/21 11:33 pm, Dan Ritter wrote:

Richard Hector wrote:

On 27/09/21 11:39 pm, Dan Ritter wrote:
> 
> One option is to run a mute and stop-playing command immediately

> on screensaver interaction.
> 
> For XFCE4, that's as easy as adding a panel object which runs an

> application, pointing that at a script, and adding an
> appropriate icon. Install xmacro.
> 
> ~/bin/quiet-and-dark
> 
> #!/bin/sh

> #not actually tested
> echo 'KeyStrPress XF86AudioPlay KeyStrRelease XF86AudioPlay' | xmacroplay :0
> echo 'KeyStrPress XF86AudioMute KeyStrRelease XF86AudioMute' | xmacroplay :0
> xscreensaver -command activate
> 
> 
> You can also assign it to run as a keyboard shortcut.


Thanks Dan,

If I understand correctly,  you're suggesting to create a clickable button
which will mute the audio, and then creating a macro to do that from within
a script, which I then need to run manually?


That sounds inverted. The button executes a script when you
click it; the script mutes and pauses audio, then activates the
screensaver.

There might be a way to invoke the mute-and-pause from the
screensaver when it activates by itself, but I don't know that
one and a few minutes searching didn't reveal it.


I'd like this to still happen if the screen locks due to inactivity. I
haven't found yet what triggers that, or where to configure the timeout.


That's in Settings, Screensaver. Try right clicking on an empty
area of desktop.


Secondly, will it re-enable audio when the screen is unlocked?


This won't, but the same invocations without the final
screensaver activation will un-mute and start playing whatever
is listening to XF86 media keys.


Thanks Dan, I think I understand some of that.

However, I'm reluctant to embark on one-off efforts, partly because I 
don't understand enough of the underlying structure, and partly because 
it only solves it for me (if I understood more, maybe I could contribute 
back, but I don't).


As I say, I consider this to be a security flaw - people can hear 
something of what my computer's doing when I'm not there and it's 
supposedly locked.


Do others agree with that?

Also, I don't know how much of the screen locking function is shared 
between the many tools that are available for this purpose. Ideally, 
this problem (if it's a problem) should be fixed in all such tools.


Does it sound reasonable then to submit a bug report to light-locker, 
with the suggestion that the maintainers contact the maintainers of 
similar/related packages as they see fit?


Cheers,
Richard



Re: silence audio on locked screen?

2021-09-28 Thread Richard Hector

On 27/09/21 11:39 pm, Dan Ritter wrote:

Richard Hector wrote:

I'm using buster with xfce4, pulseaudio, and (I think) light-locker.

When I lock my screen, audio continues to play (and system sounds are still
heard).

This seems to me like a way to leak information, and is also annoying to
anyone nearby. It's then annoying for me when I discover somebody has
unplugged my headphones to make them shut up :-)

Any suggestions for making it be quiet? Perhaps a wishlist bug for
light-locker? I don't know if it's even feasible, given the various
combinations of audio system and screen lockers.


One option is to run a mute and stop-playing command immediately
on screensaver interaction.

For XFCE4, that's as easy as adding a panel object which runs an
application, pointing that at a script, and adding an
appropriate icon. Install xmacro.

~/bin/quiet-and-dark

#!/bin/sh
#not actually tested
echo 'KeyStrPress XF86AudioPlay KeyStrRelease XF86AudioPlay' | xmacroplay :0
echo 'KeyStrPress XF86AudioMute KeyStrRelease XF86AudioMute' | xmacroplay :0
xscreensaver -command activate


You can also assign it to run as a keyboard shortcut.


Thanks Dan,

If I understand correctly,  you're suggesting to create a clickable 
button which will mute the audio, and then creating a macro to do that 
from within a script, which I then need to run manually?


I see two issues there, which were admittedly not in my original 
statement of requirements :-)


I'd like this to still happen if the screen locks due to inactivity. I 
haven't found yet what triggers that, or where to configure the timeout.


Secondly, will it re-enable audio when the screen is unlocked?

Cheers,
Richard



silence audio on locked screen?

2021-09-26 Thread Richard Hector

Hi all,

I'm using buster with xfce4, pulseaudio, and (I think) light-locker.

When I lock my screen, audio continues to play (and system sounds are 
still heard).


This seems to me like a way to leak information, and is also annoying to 
anyone nearby. It's then annoying for me when I discover somebody has 
unplugged my headphones to make them shut up :-)


Any suggestions for making it be quiet? Perhaps a wishlist bug for 
light-locker? I don't know if it's even feasible, given the various 
combinations of audio system and screen lockers.


Cheers,
Richard



Re: copy directory tree, mapping to new owners

2021-09-14 Thread Richard Hector

On 14/09/21 6:50 pm, to...@tuxteam.de wrote:

On Tue, Sep 14, 2021 at 12:17:05PM +1200, Richard Hector wrote:

On 13/09/21 7:04 pm, to...@tuxteam.de wrote:
>On Mon, Sep 13, 2021 at 11:45:02AM +1200, Richard Hector wrote:
>>On 12/09/21 6:52 pm, john doe wrote:
>
>[...]
>
>>>If you are doing this in a script, I would use a temporary directory.
>>>That way, in case of failure the destination directory is not wrongly
>>>modified.
>>>
>>>EG:
>>>
>>>$ rsync <src> <tmp-dir>
>>>
>>>Make <tmp-dir> the way you want it to be.
>>>
>>>$ rsync <tmp-dir> <dest>
>>
>>That is true, but firstly it would require more available space [...]
>
>This isn't necessary, as you could replace the second `rsync' by a `mv'
>(provided your temp tree is on the same storage volume as your target
>dir, that is).

I was assuming the suggestion was to rsync the source to the temp
while the destination still exists, before rsyncing or mv'ing over
the top of it. Total of 3 copies (temporarily) rather than 2.


Then, it's different. But in your scenario you would probably want
to take down whatever "service" relies on the destination dir while
the copy is in progress.


This is all academic, since rsync with --usermap and --groupmap does 
what I want, in place.


But john doe's proposal had the rationale of "That way, in case of 
failure the destination directory is not wrongly modified."


That implies the destination is staying put. Well, I guess deleting it 
entirely avoids "wrongly modifying" it too :-)



In any case, if you haven't the space, you haven't it. Sysadmin's
life ain't always nice :)


It's available if I want it; everything is on LVM. It's easy to grow. 
It's also all on a VPS, so expanding the total is a matter of tweaking 
the settings in a control panel. But it's harder to shrink (especially 
since I use xfs), so I prefer not to grow it if it's not necessary.


Cheers :-)
Richard



Re: copy directory tree, mapping to new owners

2021-09-14 Thread Richard Hector

On 13/09/21 7:04 pm, to...@tuxteam.de wrote:

On Mon, Sep 13, 2021 at 11:45:02AM +1200, Richard Hector wrote:

On 12/09/21 6:52 pm, john doe wrote:


[...]


>If you are doing this in a script, I would use a temporary directory.
>That way, in case of failure the destination directory is not wrongly
>modified.
>
>EG:
>
>$ rsync <src> <tmp-dir>
>
>Make <tmp-dir> the way you want it to be.
>
>$ rsync <tmp-dir> <dest>

That is true, but firstly it would require more available space [...]


This isn't necessary, as you could replace the second `rsync' by a `mv'
(provided your temp tree is on the same storage volume as your target
dir, that is).


I was assuming the suggestion was to rsync the source to the temp while 
the destination still exists, before rsyncing or mv'ing over the top of 
it. Total of 3 copies (temporarily) rather than 2.


Cheers,
Richard



Re: copy directory tree, mapping to new owners

2021-09-12 Thread Richard Hector

On 12/09/21 7:46 pm, Teemu Likonen wrote:

* 2021-09-12 12:43:29+1200, Richard Hector wrote:


The context of my question is that I'm creating (or updating) a test
copy of a website. The files are owned by one of two owners, depending
on whether they were written by the server (actually php-fpm).

To do that, I want all the permissions to remain the same, but the
ownership should be changed according to a provided map.


Looks exactly like what "rsync --usermap=FROM:TO" can do. There is also
"--groupmap" option for mapping groups.


Aha - thanks a lot :-) I guess I've been caught not reading the man page 
thoroughly enough. The pitfalls of thinking that you know everything 
that a tool does, just because you use it often ...


The way the docs are written seems to imply a sender and receiver; I'll 
have to check it works for a local copy ... it does.


Now I need to rewrite my script somewhat, since to do this it's going to 
have to run as root ...


Thanks,
Richard



Re: copy directory tree, mapping to new owners

2021-09-12 Thread Richard Hector

On 12/09/21 6:53 pm, l0f...@tuta.io wrote:


# actually not necessary? rsync will create it
mkdir -p mysite_test/doc_root


You can make a simple test to know that but I would say that rsync doesn't create your 
destination "root" directory (the one you specify on the command line) unless 
`--mkpath` is used.


I was fairly confident of that, from my regular usage.


# The trailing / matters. Does it matter on the source as well?
# I generally include it.
rsync -a mysite/doc_root/ mysite_test/doc_root/  # The trailing / matters.


Actually, I'm not sure to understand Greg's remark here.

In my opinion, trailing slash doesn't matter for destination folder on the 
contrary of *source* folder.

In other words, for me, the following are equal:
rsync -a mysite/doc_root/ mysite_test/doc_root/
rsync -a mysite/doc_root/ mysite_test/doc_root

But not the following:
rsync -a mysite/doc_root mysite_test/doc_root => you will get an extra 
"doc_root" folder (the source one) in your dest, i.e. : 
mysite_test/doc_root/doc_root and then the content of doc_root source
rsync -a mysite/doc_root/ mysite_test/doc_root => your doc_root (destination) 
folder will get doc_root content (source) directly


Yep. As I've mentioned elsewhere, I habitually include trailing slashes 
for both source and destination when I'm replicating a whole tree. I 
couldn't remember the details of what happens when you don't, but I know 
what happens when I do :-)


Cheers,
Richard



Re: copy directory tree, mapping to new owners

2021-09-12 Thread Richard Hector

On 12/09/21 6:52 pm, john doe wrote:

On 9/12/2021 3:45 AM, Richard Hector wrote:




Thanks, that looks reasonable. It does mean, though, that the files
exist for a while with the wrong ownership. That probably doesn't
matter, but somehow 'feels wrong' to me.



If you are doing this in a script, I would use a temporary directory.
That way, in case of failure the destination directory is not wrongly
modified.

EG:

$ rsync <src> <tmp-dir>

Make <tmp-dir> the way you want it to be.

$ rsync <tmp-dir> <dest>


That is true, but firstly it would require more available space, and 
secondly, as long as I know about the failure, it doesn't worry me too 
much. This is a script I will call manually.



Given that you want to change the ownership, you may want to emulate the
options implied by '-a' but without the ownership option ('-o').


I do that in my existing script (also not -g (group) or -D 
(specials/devices (which don't exist there anyway)) - and sometimes -p 
(permissions), if I've pre-created the tree with the permissions I want. 
The last one is a slightly different use case, but I should make sure I 
know which use case I'm trying for ...



My habits would also lead me to do all of the above from above the two
directories in question (in my case, /srv - for /srv/mysite/doc_root and
/srv/mysite_test/doc_root) So:

cd /srv

# actually not necessary? rsync will create it
mkdir -p mysite_test/doc_root

# The trailing / matters. Does it matter on the source as well?


According to the man page it does (1):

"rsync -avz foo:src/bar/ /data/tmp
A trailing slash on the source changes this behavior to avoid creating
an additional directory level at the destination. You can think of a
trailing / on a source as meaning 'copy the contents'"


# I generally include it.


In a script, you need to be sure what the cmd does do or not do!!! :)


True. But many options I've learned from my everyday usage, and 
sometimes forgotten the rationale - I just know it does what I want :-)




rsync -a mysite/doc_root/ mysite_test/doc_root/  # The trailing /
matters.

find mysite_dest -user mysite -exec chown mysite_test {} +

# I prefer mysite_test-run; it keeps consistency with
# the ${sitename}-run pattern used elsewhere
find mysite_dest -user mysite-run -exec chown mysite_test-run {} +

Have I broken anything there?



Not that I can see but testing will tell!


This is what I would do.  And I would do it *interactively*.

If you insist on making a script, then it will be slightly more
complicated, because you'll need to add error checking.


The trouble with doing it interactively, when it needs to be done many
times (and on several sites), is that each time there's opportunity to
make a mistake. And it means the process needs to be documented
separately from the script.

In fact, I'd incorporate the above in a larger script, which does things
like copying the database (and changing the db name in the site config).

Error checking in shell scripts is something I certainly need to learn


At the very least, '<cmd> ['&&'|'||'] <cmd to handle the error>'.

If you use Bash and don't need to be POSIX compliant and /or portable
you can be a bit more creative.


Heh - you've cut my sentence in two, which appears to change the meaning 
slightly :-)


" ... is something I need to learn and practice more".

You make it look like I need to learn error checking from scratch :-)

I am using bash. And I am currently using 'set -e' to handle that kind 
of error globally. As I say, as long as I can see that it broke, it 
doesn't matter too much.
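A minimal illustration of the two styles being discussed — a global 'set -e' plus an explicit '||' for the one failure you want to handle rather than die on (file names hypothetical):

```shell
#!/bin/sh
set -e                             # abort on the first unhandled failure

echo demo > /tmp/set_e_demo.txt    # the script would stop here on error

# An expected failure, handled explicitly so set -e doesn't kill us:
grep -q nosuchword /tmp/set_e_demo.txt || echo "pattern not found (handled)"

echo "script reached the end"
```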



and practice more - I tend to rely somewhat on 'knowing' what I'm
working with, which is probably not a good idea.


Yes in a script it is like shooting yourself in the foot.


I do test it.

Cheers,
Richard



Re: copy directory tree, mapping to new owners

2021-09-11 Thread Richard Hector

On 12/09/21 12:52 pm, Greg Wooledge wrote:

On Sun, Sep 12, 2021 at 12:43:29PM +1200, Richard Hector wrote:

The context of my question is that I'm creating (or updating) a test copy of
a website. The files are owned by one of two owners, depending on whether
they were written by the server (actually php-fpm).

To do that, I want all the permissions to remain the same, but the ownership
should be changed according to a provided map. For example, if the old file
was owned by 'mysite', the copy should be owned by 'mysite_test'. If the old
file was owned by 'mysite-run' (the user php runs as), the copy should be
owned by 'mysite_test-run' (if that has to be 'mysite-run_test' to make
things easier, I can live with that).


cd /src
mkdir -p /dest
rsync -a . /dest/  # The trailing / matters.
cd /dest
find . -user mysite -exec chown mysite_test {} +
find . -user mysite-run -exec chown mysite-run_test {} +


Thanks, that looks reasonable. It does mean, though, that the files 
exist for a while with the wrong ownership. That probably doesn't 
matter, but somehow 'feels wrong' to me.


My habits would also lead me to do all of the above from above the two 
directories in question (in my case, /srv - for /srv/mysite/doc_root and 
/srv/mysite_test/doc_root) So:


cd /srv

# actually not necessary? rsync will create it
mkdir -p mysite_test/doc_root

# The trailing / matters. Does it matter on the source as well?
# I generally include it.
rsync -a mysite/doc_root/ mysite_test/doc_root/  # The trailing / 
matters.


find mysite_dest -user mysite -exec chown mysite_test {} +

# I prefer mysite_test-run; it keeps consistency with
# the ${sitename}-run pattern used elsewhere
find mysite_dest -user mysite-run -exec chown mysite_test-run {} +

Have I broken anything there?


This is what I would do.  And I would do it *interactively*.

If you insist on making a script, then it will be slightly more
complicated, because you'll need to add error checking.


The trouble with doing it interactively, when it needs to be done many 
times (and on several sites), is that each time there's opportunity to 
make a mistake. And it means the process needs to be documented 
separately from the script.


In fact, I'd incorporate the above in a larger script, which does things 
like copying the database (and changing the db name in the site config).


Error checking in shell scripts is something I certainly need to learn 
and practice more - I tend to rely somewhat on 'knowing' what I'm 
working with, which is probably not a good idea.


Thanks,
Richard



copy directory tree, mapping to new owners

2021-09-11 Thread Richard Hector

Hi all,

The context of my question is that I'm creating (or updating) a test 
copy of a website. The files are owned by one of two owners, depending 
on whether they were written by the server (actually php-fpm).


To do that, I want all the permissions to remain the same, but the 
ownership should be changed according to a provided map. For example, if 
the old file was owned by 'mysite', the copy should be owned by 
'mysite_test'. If the old file was owned by 'mysite-run' (the user php 
runs as), the copy should be owned by 'mysite_test-run' (if that has to 
be 'mysite-run_test' to make things easier, I can live with that).


Group ownership is or would be the same, but in fact it's simpler 
because both users are members of the same group - all files are or 
should be group-owned by the same group (mysite, mapping to mysite_test).


Is there any pre-existing tool that will do this? Or will I need to 
write a perl script or similar?


What I've done in the past is use the same users for both production and 
testing, and do the copy by running rsync as the mysite user, but 
firstly I'd rather have more isolation between the two, secondly the 
mysite user might not be able to read all the mysite-run files, and 
thirdly the ownership of those (mysite-run) files gets changed, making 
it an imperfect copy.
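Short of a dedicated tool, GNU find can at least preview the remapping before anything is modified — a dry-run sketch that prints the chown commands the map implies, using the user names from the example above:

```shell
# Print (don't run) the chown calls needed to remap each owner; pipe
# the output to sh once it looks right. --no-dereference guards symlinks.
find /srv/mysite_test/doc_root -user mysite \
     -printf 'chown --no-dereference mysite_test %p\n'
find /srv/mysite_test/doc_root -user mysite-run \
     -printf 'chown --no-dereference mysite_test-run %p\n'
```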


Thanks in advance,
Richard



Re: Trouble upgrading Debian (reply to David Wright)

2021-09-07 Thread Richard Hector

On 7/09/21 5:25 am, John Hasler wrote:

Curt writes:
I suggest you follow the earlier advice, and set Thunderbird 
to compose your email as plain text


Curt didn't write that; I did. Please be careful with your attributions.

I'm intrigued to know how this mistake happened, however. Were you 
perhaps replying to a digest message instead of a normal individual one?


Cheers,
Richard



Re: Trouble upgrading Debian (reply to David Wright)

2021-09-06 Thread Richard Hector

On 6/09/21 1:20 pm, Dedeco Balaco wrote:

3. Tried to do 'apt update' as root, but it does not work. GPG signature
error.


21:18:54 [ 0] root@compo: /etc/apt # apt-mark hold firefox-esr
firefox-esr-l10n-pt-br thunderbird thunderbird-l10n-pt-br firefox-esr
set on hold. firefox-esr-l10n-pt-br set on hold. thunderbird set on
hold. thunderbird-l10n-pt-br set on hold. 21:46:26 [ 0] root@debian:
/etc/apt # apt update Get:1http://security.debian.org  buster/updates
InRelease [65.4 kB] Err:1http://security.debian.org  buster/updates
InRelease The following signatures couldn't be verified because the
public key is not available: NO_PUBKEY 112695A0E562B32A NO_PUBKEY
54404762BBB6E853 Get:2http://deb.debian.org/debian  buster InRelease
[122 kB] Get:3http://deb.debian.org/debian  buster-updates InRelease
[51.9 kB] Err:3http://deb.debian.org/debian  buster-updates InRelease
The following signatures couldn't be verified because the public key
is not available: NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY
0E98404D386FA1D9 Get:4http://deb.debian.org/debian  buster/main
Sources [7,836 kB] Get:5http://deb.debian.org/debian  buster/main
amd64 Packages [7,907 kB] Get:6http://deb.debian.org/debian
buster/main i386 Packages [7,863 kB] Get:7
http://deb.debian.org/debian  buster/main Translation-pt_BR [683 kB]
Get:8http://deb.debian.org/debian  buster/main Translation-en [5,968
kB] Get:9http://deb.debian.org/debian  buster/main Translation-pt [309
kB] Get:10http://deb.debian.org/debian  buster/main amd64 Contents
(deb) [37.3 MB] Get:11http://deb.debian.org/debian  buster/main i386
Contents (deb) [37.3 MB] Reading package lists... Done W: GPG error:
http://security.debian.org  buster/updates InRelease: The following
signatures couldn't be verified because the public key is not
available: NO_PUBKEY 112695A0E562B32A NO_PUBKEY 54404762BBB6E853 E:
The repository 'http://security.debian.org  buster/updates InRelease'
is not signed. N: Updating from such a repository can't be done
securely, and is therefore disabled by default. N: See apt-secure(8)
manpage for repository creation and user configuration details. W: GPG
error:http://deb.debian.org/debian  buster-updates InRelease: The
following signatures couldn't be verified because the public key is
not available: NO_PUBKEY 648ACFD622F3D138 NO_PUBKEY 0E98404D386FA1D9
E: The repository 'http://deb.debian.org/debian  buster-updates
InRelease' is not signed. N: Updating from such a repository can't be
done securely, and is therefore disabled by default. N: See
apt-secure(8) manpage for repository creation and user configuration
details. 21:46:50 [ 0] root@debian: /etc/apt #


As you can see, the plain text version of your email is not very 
readable. I suggest you follow the earlier advice, and set Thunderbird 
to compose your email as plain text, even if only for list mail.


Cheers,
Richard



Re: which vs. type, and recursion?

2021-09-04 Thread Richard Hector

On 4/09/21 9:26 pm, Brian wrote:

On Sat 04 Sep 2021 at 21:21:38 +1200, Richard Hector wrote:


Greg Wooledge pointed out in another thread that 'type' is often better than
'which' for finding out what kind of command you're about to run, and where
it comes from.

A quick test, however, threw up another issue:

richard@zircon:~$ type ls
ls is aliased to `ls --color=auto'

Great, so it's an alias. But what is the underlying ls? How do I find out? I
did find out, by unaliasing ls and trying again, which showed that it's an
actual executable, /usr/bin/ls, and not a shell builtin.

But is there an easier/better way? Can 'type' be asked to recursively decode
aliases?

I looked at the relevant section of bash(1) (when I eventually found it),
but was not particularly enlightened.


Use 'help type' and try 'type -a ls'.



That ('help type') is much more readable than bash(1), thanks. I think I 
might have known about 'help', but had forgotten ...
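For reference, 'type -a' answers the original question directly — it lists every interpretation of the name, alias first, then the executable path(s) found in $PATH:

```shell
# Reproduce the alias situation in a fresh shell, then ask type -a;
# both the alias and the underlying /usr/bin/ls (or /bin/ls) are listed.
bash -c "alias ls='ls --color=auto'; type -a ls"
```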


Cheers,
Richard



Re: Tips/advice for installing latest version of fzf?

2021-09-04 Thread Richard Hector

On 1/09/21 3:32 am, Greg Wooledge wrote:

In bash, which is *not* a shell builtin -- it's a separate program,
/usr/bin/which.


Well _that_ took a while to parse correctly :-) I know bash is not a 
shell builtin, that would be weird ...


Cheers,
Richard



which vs. type, and recursion?

2021-09-04 Thread Richard Hector
Greg Wooledge pointed out in another thread that 'type' is often better 
than 'which' for finding out what kind of command you're about to run, 
and where it comes from.


A quick test, however, threw up another issue:

richard@zircon:~$ type ls
ls is aliased to `ls --color=auto'

Great, so it's an alias. But what is the underlying ls? How do I find 
out? I did find out, by unaliasing ls and trying again, which showed 
that it's an actual executable, /usr/bin/ls, and not a shell builtin.


But is there an easier/better way? Can 'type' be asked to recursively 
decode aliases?


I looked at the relevant section of bash(1) (when I eventually found 
it), but was not particularly enlightened.


Cheers,
Richard



Re: How to update Debian 11 source.list to testing?

2021-09-03 Thread Richard Hector

On 4/09/21 2:17 am, Roberto C. Sánchez wrote:

You might consider using bookwork rather than testing, however.


Or bookworm, even.

Richard



Re: explanation of first column "v" is hiding

2021-07-27 Thread Richard Hector

On 28/07/21 7:55 am, Greg Wooledge wrote:

https://bugs.debian.org/991578


Nice.

I looked at the patch, but I'm not familiar with what processing gets 
done on that code.


Does your reference to the reference manual, in the last  of the 
diff, get expanded to tell me where to find the reference manual? Is 
that feasible?


Cheers,
Richard



Re: location of screenshots during debian install

2021-07-27 Thread Richard Hector

On 27/07/21 7:14 pm, Jupiter777 wrote:



hello,

I am in the middle of installing buster 10.10.x on my computer.

I see that I can take screenshots as the dialog boxes tell me:

   Screenshot Saved as /var/log/


But /var/log is not on the bootable  usb I am using ...

Where are the screenshots?  I like to  use them for troubleshooting?


I've never used this facility, so I'm only guessing.

But if they're not on the installer media, then they're presumably on 
the disk you're installing to, which is mounted on /target/ during the 
installation - so /target/var/log/.


Whether and how you can get them off that disk if you haven't finished 
the installation is a different matter, of course :-)


Richard



Re: explanation of first column "v" is hiding

2021-07-26 Thread Richard Hector

On 27/07/21 5:22 am, Greg Wooledge wrote:

P.S. If we're complaining about the lack of documentation for the cryptic
output of the Debian tool set, can we say some words about aptitude?
Seriously.

This command searches for packages that require or conflict with
the given package. It displays a sequence of dependencies leading
to the target package, along with a note indicating the installed
state of each package in the dependency chain:

$ aptitude why kdepim
i   nautilus-data Recommends nautilus
i A nautilus  Recommends desktop-base (>= 0.2)
i A desktop-base  Suggests   gnome | kde | xfce4 | wmaker
p   kde   Dependskdepim (>= 4:3.4.3)

What do *any* of those column-ish letters mean?  I can guess "i", maybe,
but not "A" or "p".  (I might have guessed "purged" for "p", but that
doesn't seem to fit the picture being painted by the example, which is
of a system that *does* have KDE installed.  In any case, why should I
have to guess these things?)


My main issue with aptitude documentation is that most of it isn't in 
the manpage, but in the 'aptitude reference manual' which is referred to 
without a link. The path given in the SEE ALSO section might be that, 
but it doesn't say so.


But experience suggests that A means 'automatically installed' (and p 
stands for purged, which linguistically doesn't really mean 'maybe has 
been purged; maybe has never been installed').


Cheers,
Richard



Re: How do I mount the USB stick containing the installer in Rescue Mode?

2021-07-21 Thread Richard Hector

On 21/07/21 11:39 pm, Greg Wooledge wrote:

No, a bind mount doesn't take a device name as an argument.  It takes
two directory names.  From the man page:

mount --bind|--rbind|--move olddir newdir

It's used when you've already got the device mounted somewhere (the first
directory), and you'd also like it to appear in a second place.


That could be interpreted to mean it still applies to a (mounted) 
device, but it doesn't - olddir can be anywhere, not just a mount point.


Richard



Re: MDs & Dentists

2021-07-21 Thread Richard Hector

On 22/07/21 3:38 am, Reco wrote:

One sure way to beat ransomware is to
take immutable backups


That's fine if keeping access to your data is all you care about.

With the more modern ransomware that threatens to publish your (and/or 
your customers') data, not so much.


Richard



Apparmor messages on LXC container, after host upgrade to buster

2021-06-23 Thread Richard Hector

Hi all,

This is a copy of a message I posted to lxc-users last week; maybe more 
people will see it here :-)


I'm getting messages like this after an upgrade of the host from stretch 
to buster:


Jun 18 12:09:08 postgres kernel: [131022.470073] audit: type=1400 
audit(1623974948.239:107): apparmor="DENIED" operation="mount" 
info="failed flags match" error=-13 profile="lxc-container-default-cgns" 
name="/" pid=15558 comm="(ionclean)" flags="rw, rslave"


I've seen several similar things from web searches, such as this from 
the lxc-users list, 5 years ago:


https://lxc-users.linuxcontainers.narkive.com/3t0leW0p/apparmor-denied-messages-in-the-logs

The suggestion seems to be that it doesn't matter, as long as mounts are 
actually working ok (all filesystems seem to be mounted).


But if the mounts are working, what triggers the error? If the mounts 
are set up outside the container, why is the container trying to mount 
anything? There's nothing in /etc/fstab in the container.


In case it's relevant, /var/lib/lxc/<container>/rootfs is a mount on the 
host, for all containers. All containers have additional mounts defined 
in the lxc config, and those filesystems are also mounts on the host, 
living under /guestfs. They're all lvm volumes, with xfs, as are the 
root filesystems.


Any tips welcome.

Cheers,
Richard



Re: Web log analysis

2021-06-23 Thread Richard Hector

On 27/05/21 9:55 pm, Richard Hector wrote:

Hi all,

I need to get a handle on what my web servers are doing.


Apologies for my lack of response.

Thanks for all of the useful and interesting replies.

I'll look into this further later; in the meantime I think I solved my 
immediate needs with tools like grep :-)
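As an example of the kind of quick ad-hoc analysis meant here — tallying hits per day from a combined-format access log (the log path is a placeholder):

```shell
# Field 4 of the combined log format looks like "[10/Oct/2021:13:55:36";
# characters 2-12 are the date, which sort | uniq -c then tallies.
awk '{print substr($4, 2, 11)}' /var/log/nginx/access.log | sort | uniq -c
```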


Cheers,
Richard



Re: debian installation issue

2021-06-23 Thread Richard Hector

On 22/06/21 12:54 am, Steve McIntyre wrote:

[ Apologies, missed this last week... ]

to...@tuxteam.de wrote:


On Mon, Jun 14, 2021 at 09:20:52AM +0300, Andrei POPESCU wrote:

On Vi, 11 iun 21, 15:07:11, Greg Wooledge wrote:
> 
> Secure Boot (Microsoft's attempt to stop you from using Linux) relies on

> UEFI booting, and therefore this was one of the driving forces behind it,
> but not the *only* driving force.  If your machine doesn't use Secure Boot,
> don't worry about it.  It won't affect you.

While I'm not a fan of Microsoft:

https://wiki.debian.org/SecureBoot#What_is_UEFI_Secure_Boot_NOT.3F


Quoting from there:

 "Microsoft act as a Certification Authority (CA) for SB, and they will
  sign programs on behalf of other trusted organisations so that their
  programs will also run."

Now two questions:

- do you know any other alternative CA besides Microsoft who is
  capable of effectively doing this? In a way that it'd "work"
  with most PC vendors?


I've been in a number of discussions about this over the last few
years, particularly when talking about adding arm64 Secure Boot and
*maybe* finding somebody else to act as CA for that. There's a few
important (but probably not well-understood) aspect ofs the CA role
here:

  * the entity providing the CA needs to be stable (changing things is
expensive and hard)
  * they need to be trustworthy - having an existing long-term business
relationship with the OEMs is a major feature here
  * they need to be *large* - if there is a major mistake that might
cause a problem on a lot of machines in production, the potential
cost liability (and lawsuits) from OEMs is *huge*

There are not many companies who would fit here. Intel and AMD are
both very interested in enhancing trust and security at this kind of
level, but have competing products and ideas, for example.


Is that something that needs to be done by one company? Perhaps because 
of how SecureBoot is implemented?


I'd prefer to be able to add Debian's key either in addition to or 
instead of Microsoft's, which could also be happily installed alongside 
those of Intel, AMD, your favourite government security agency or 
whoever. And Debian can get theirs signed by whichever of those they 
might think is appropriate. But I want to be able to reduce that list to 
just Debian's, or just the EFF's, or mine. Whatever combination I choose.


I think that should all work ok? Changing things, rather than being 
expensive and hard, should just be a matter of either getting a new 
organisation to sign Debian's key, and/or having them revoke one. As one 
of those on the list.


As an aside, I'd like to see this with web certificates too - I want to 
be able to get my cert signed by LetsEncrypt _and_ whatever commercial 
CA or CAs I choose, so if one of them does something stupid and needs to 
be removed from the list of approved CAs, it doesn't break the internet, 
because any significant site will have its certs signed by others as well.


Richard



Re: A Proposal: Each of Online Debian Man pages could have a wiki (Main page / Talk Page, etc.) at its bottom, with only Example Code Lines ...

2021-06-19 Thread Richard Hector

On 19/06/21 2:28 pm, Susmita/Rajib wrote:

Aren't the ML members aware that Debian already has a Man Wiki pages
repository? Debian Man Wiki Pages are available at:
https://manpages.debian.org/


I'm pretty sure that's not a wiki.

It looks like a set of automatically generated static pages.

Richard



Re: Server setup

2021-06-14 Thread Richard Hector

On 15/06/21 9:26 am, Greg Wooledge wrote:

On Mon, Jun 14, 2021 at 04:39:11PM -0400, Polyna-Maude Racicot-Summerside wrote:

I would like to have my system running on different partition for home,
usr, var, tmp, etc... This is a safe route to prevent some problem (such
as filling up a partition that risks trashing the system).


"etc."?  As in, that's NOT EVEN THE ENTIRE LIST ?!?

Come on.  Get real here.


Or /etc, which might be worse ...

Richard



Web log analysis

2021-05-27 Thread Richard Hector

Hi all,

I need to get a handle on what my web servers are doing.

I remembered the name awstats, and installed it, but when I finally got 
it going (I had a typo in my nginx custom format), the output was 
somewhat opaque.


Things I'd like:

Ability to safely provide access to my customers
Ability to show trends over a decent length of time (at least a year)
Reasonable support for custom log formats, and ideally be helpful if I 
get it wrong ("It doesn't match" really wasn't very helpful)

Easy to navigate different views
Graphs would be nice
Not too much load on the web server
Free Software
Included in Debian preferred

Many of my sites are Wordpress sites, so I could choose from the many 
available plugins, but I suspect those will probably only start 
reporting from when they're installed; I have existing logs going back 
anything up to a few years. Generic is probably better.


Any suggestions?

Cheers,
Richard



Re: kernel: perf: interrupt took too long

2021-05-26 Thread Richard Hector

On 24/05/21 9:50 pm, The Wanderer wrote:

On 2021-05-23 at 23:55, Richard Hector wrote:


Hi all,

I see messages like this frequently for a day or two after rebooting
a particularly slow old machine (Atom-based HP thin client, running
as an OpenVPN endpoint):

May 23 05:36:37 ovpn kernel: [14268.392418] perf: interrupt took too
long (4020 > 3996), lowering kernel.perf_event_max_sample_rate to
49750


I get this same behavior, except that my machine is not in any way slow
(except for intermittent issues with the secondary SATA controller on
the motherboard).


Would it be a good idea to set this value at boot time, rather than
waiting for it to auto-adjust down till it settles?


I wouldn't think so; if nothing else, it doesn't always seem to settle
out to the same value every time.



Thanks all,

I'll ignore it :-)

Cheers,
Richard



kernel: perf: interrupt took too long

2021-05-23 Thread Richard Hector

Hi all,

I see messages like this frequently for a day or two after rebooting a 
particularly slow old machine (Atom-based HP thin client, running as an 
OpenVPN endpoint):


May 23 05:36:37 ovpn kernel: [14268.392418] perf: interrupt took too 
long (4020 > 3996), lowering kernel.perf_event_max_sample_rate to 49750


Would it be a good idea to set this value at boot time, rather than 
waiting for it to auto-adjust down till it settles?
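If one did want to pin it at boot, a sysctl.d snippet would be the usual mechanism. This is an untested sketch; 49750 is simply the value the kernel settled on in the log line above, and the file name is arbitrary:

```shell
# Sketch: pin the sample rate instead of letting the kernel auto-tune
# it downward (49750 taken from the log message above).
printf 'kernel.perf_event_max_sample_rate = 49750\n' \
    > /etc/sysctl.d/90-perf-sample-rate.conf

# Apply all sysctl.d snippets now; they are also applied at every boot.
sysctl --system
```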


Actually I don't know if it's because the machine is slow; it's just the 
only machine I see this on.


Cheers,
Richard



Re: can linux run on hp t610 thin client?

2021-05-21 Thread Richard Hector

On 22/05/21 1:39 pm, Long Wind wrote:

i'm about to buy hp t610, thanks!

https://support.hp.com/us-en/document/c03235347 



A quick search of the web suggests yes:

https://www.parkytowers.me.uk/thin/hp/t610/linux.shtml

Richard



Re: OT: minimum bs for dd?

2021-05-17 Thread Richard Hector

On 17/05/21 6:30 pm, to...@tuxteam.de wrote:

This is one point. The other, which adds more convenience is that
dd has an explicit argument for (input and) output file name, whereas
cat relies on redirection. This becomes relevant when you try to

   sudo cat thing > that_other_thing

and realise that the ">" is *not* in the sudo context (and what
you would have to do to "fix" that).

Instead,

   sudo dd if=thing of=this_other_thing

Just Works out of the box. More relevant when doing ">>" (use
dd's oflag=append then).


And if you already have a long pipeline in your command history, that 
didn't work because of the above issue, you can use:


... |sudo dd of=this_other_thing

Note: I'm not sure what the difference is between that_other_thing and 
this_other_thing :-)
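The two usual workarounds can be sketched like this. They're shown here without sudo so the commands are harmless to try anywhere; prepend sudo (as in the commented forms) when the target really needs root:

```shell
tmp=$(mktemp -d)
echo hello > "$tmp/thing"

# Fix 1: run the whole command line, redirection included, in one shell.
# With privileges that would be: sudo sh -c 'cat thing > that_other_thing'
sh -c "cat '$tmp/thing' > '$tmp/that_other_thing'"

# Fix 2: let a separate tee process do the writing, so the ">" never
# happens in your own shell: cat thing | sudo tee that_other_thing >/dev/null
cat "$tmp/thing" | tee "$tmp/this_other_thing" >/dev/null

cat "$tmp/that_other_thing" "$tmp/this_other_thing"   # prints "hello" twice
```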


Richard



Re: What is the best (and free) Linux softphone?

2021-05-07 Thread Richard Hector

On 6/05/21 7:59 am, Weaver wrote:



https://jami.net/


I get puzzled by sites like that that don't seem to say _what_it_is_ ...

Luckily I can get that info from the debian package info :-)

Richard



Re: Firefox HTTPS-only mode breaks sites that return 404 for HTTPS connections

2021-04-15 Thread Richard Hector

On 16/04/21 1:32 am, Dan Ritter wrote:

Last step: create a cron job to run once a week that does
this:

certbot renew && \
cat /etc/letsencrypt/live/eyeblinkuniverse.com/privkey.pem \
/etc/letsencrypt/live/eyeblinkuniverse.com/cert.pem > \
/etc/letsencrypt/live/eyeblinkuniverse.com/merged.pem && \
service lighttpd restart


Doesn't the certbot package create a cronjob/timer for you?

And I'd probably put the merge in a deploy hook, rather than modifying 
the cronjob and/or systemd timer.


Not to say the above won't work, of course - except that if you create 
or modify the cronjob without touching the systemd timer (and you're 
using systemd), you might get unexpected results.
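A deploy hook along these lines would keep the merge tied to successful renewals rather than to the scheduler. This is an untested sketch; certbot runs executables from /etc/letsencrypt/renewal-hooks/deploy/ after each successful renewal and exports RENEWED_LINEAGE pointing at the renewed certificate's live directory:

```shell
#!/bin/sh
# Sketch: save as /etc/letsencrypt/renewal-hooks/deploy/merge-lighttpd.sh
# (mode 0755). certbot sets RENEWED_LINEAGE for deploy hooks, e.g.
# /etc/letsencrypt/live/example.com, so no path needs hard-coding.
set -e
cat "$RENEWED_LINEAGE/privkey.pem" "$RENEWED_LINEAGE/cert.pem" \
    > "$RENEWED_LINEAGE/merged.pem"
service lighttpd restart
```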


And I haven't tested my thoughts :-)

Richard



Re: device names - so much escaping

2021-04-05 Thread Richard Hector

On 5/04/21 11:48 pm, Greg Wooledge wrote:

On Mon, Apr 05, 2021 at 09:29:59PM +1200, Richard Hector wrote:

/dev/vg-backup0/d-rh-rm1-home



/dev/mapper/vg--backup0-d--rh--rm1--home



Apr  5 07:06:25 backup systemd[1]:
dev-mapper-vg\x2d\x2dbackup0\x2dd\x2d\x2drh\x2d\x2drm1\x2d\x2dsrv.device:
Job 
dev-mapper-vg\x2d\x2dbackup0\x2dd\x2d\x2drh\x2d\x2drm1\x2d\x2dsrv.device/start
failed with result 'timeout'.



But is this all really necessary? Can't these tools work without assigning
special meaning to ordinary characters?


Well, you're asking in the wrong place.  We're just end users here.  If
you want the tools to behave differently, you need to get in touch with
their respective developers or support forums.



True, true.

I guess I was hoping for general comment, whether anyone had insights, 
or agreed/disagreed with me :-)


Richard



device names - so much escaping

2021-04-05 Thread Richard Hector

Hi all,

I use LVM quite a lot.

> richard@backup:~$ sudo lvs|wc -l
> 140


The trouble is, things like device mapper seem to involve lots of name 
translations.


So the volume I call

d-rh-rm1-home

  (for dirvish backups of /home on rh-rm1 (my (rh) first (1) redmine 
(rm) server)) on


vg-backup0

  which is understandably known (to me) as

/dev/vg-backup0/d-rh-rm1-home

  is then known by its alternative name:

/dev/mapper/vg--backup0-d--rh--rm1--home

  because dev mapper seems to need '-' for its own purposes.

But then if systemd has a problem with it, I see a log line like this:

Apr  5 07:06:25 backup systemd[1]: 
dev-mapper-vg\x2d\x2dbackup0\x2dd\x2d\x2drh\x2d\x2drm1\x2d\x2dsrv.device: 
Job 
dev-mapper-vg\x2d\x2dbackup0\x2dd\x2d\x2drh\x2d\x2drm1\x2d\x2dsrv.device/start 
failed with result 'timeout'.


  in my logcheck email. Which is nearly unintelligible.
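The two renamings can be reproduced mechanically. A sketch (device-mapper's rule is hyphen doubling inside VG/LV names; systemd then escapes each '-' in the device path as \x2d when building a unit name, and systemd-escape ships with systemd):

```shell
vg="vg-backup0"; lv="d-rh-rm1-home"

# device-mapper doubles each '-' inside the VG and LV names, so the
# single '-' joining them stays unambiguous:
dbl() { printf '%s' "$1" | sed 's/-/--/g'; }
mapper="/dev/mapper/$(dbl "$vg")-$(dbl "$lv")"
echo "$mapper"    # /dev/mapper/vg--backup0-d--rh--rm1--home

# systemd's own tool shows the unit name it would derive (only runs if
# systemd-escape is installed):
command -v systemd-escape >/dev/null &&
    systemd-escape --path --suffix=device "$mapper" || true
```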

Now I could make the effort to avoid using '-' in my volume names (and 
mount points, which also get messed up in systemd's reporting).


But is this all really necessary? Can't these tools work without 
assigning special meaning to ordinary characters?


(The problem systemd's reporting, of course, is that I must have 
forgotten to "systemctl daemon-reload" after editing /etc/fstab ... that 
never used to be needed either.)


Grumpily,

Richard



Re: Conflicting alternatives

2021-02-18 Thread Richard Hector

On 19/02/21 2:34 am, Andrei POPESCU wrote:

On Jo, 18 feb 21, 08:15:39, Dan Ritter wrote:
Richard Hector wrote: 
> On 18/02/21 5:22 am, Greg Wooledge wrote:

> > On Thu, Feb 18, 2021 at 12:06:37AM +0800, Kevin Shell wrote:
> > > You could stop one and start the other,
> > > there's no resources or port conflict.
> > > I want to just keep both, not run them at the same time.
> > 
> > Again, as stated at the start of this fiasco of a thread, Debian policy

> > says that all daemons must be started up by default.
> 
> It is possible to install both nginx and apache2 at the same time. Both

> presumably try to get port 80? Not sure how that resolves; I don't have a
> machine I want to try it on at the moment.

The order of events is:

- install one. Change the listening port to something other than
  80.

- install the next.

Web servers are built to interoperate with each other; it is not
ridiculous to have a dozen web servers on a machine each
listening to different ports, or listening on sockets and being
proxied by a different web server.


It seems to me the important difference is that it is comparatively easy
and common to interact with a webserver on a non-standard port, whereas
running a SMTP server on a non-standard port might be useful only in
very specific corner cases.


There are multiple standard ports, though. E.g. one could easily run one 
MTA on port 25 for incoming mail, and another on 587 for outgoing.


My point, though, was that there's a precedent for daemons that default 
to listening on the same port, yet are co-installable. And which will, 
IIRC, throw errors on installation if you try it without paying special 
attention.


Richard



Re: Conflicting alternatives

2021-02-17 Thread Richard Hector

On 18/02/21 5:22 am, Greg Wooledge wrote:

On Thu, Feb 18, 2021 at 12:06:37AM +0800, Kevin Shell wrote:

You could stop one and start the other,
there's no resources or port conflict.
I want to just keep both, not run them at the same time.


Again, as stated at the start of this fiasco of a thread, Debian policy
says that all daemons must be started up by default.


It is possible to install both nginx and apache2 at the same time. Both 
presumably try to get port 80? Not sure how that resolves; I don't have 
a machine I want to try it on at the moment.


Richard



Re: website permissions and ownership

2021-02-02 Thread Richard Hector

On 2/02/21 10:42 pm, Jeremy Ardley wrote:


On 2/2/21 5:32 pm, Jeremy Ardley wrote:


On 2/2/21 4:55 pm, Richard Hector wrote:


What you are doing sounds pretty O.K. Though I personally also use 
SELinux for web facing services.


Thanks.

I haven't looked in to SELinux. I looked at AppArmor, but it appears 
that it won't work as expected in an LXC container, which is where I 
run this. Would SELinux work there? SELinux, from what I can see, 
seems more complex to learn than AppArmor.


SELinux is quite hard to get right, but when it's done properly it's 
very hard to exploit. Basically if it's not explicitly permitted it's 
forbidden.


SELinux has the advantage that it by default enforces rules that you 
should probably already have in place. So for example it will 
automatically stop writes to web content by the web server. You have 
to explicitly allow the web server to make modifications to specific 
files or directories. SELinux makes you think about what is important 
to you and what you think should be alterable on your website.


Getting back to my staging scenario, you start with default SELinux 
rules completely restricting web server write access to content. You'd 
have another set of SELinux rules that allow some other process to 
make changes to the content. You may even have a set of SELinux rules 
allowing the web server to write to an upload directory - but likely 
not read from it.
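For reference, the per-directory opt-in described above looks roughly like this under the targeted policy. A sketch only: the path is hypothetical and `semanage` comes from the policycoreutils tooling:

```shell
# Sketch (targeted policy assumed; path is hypothetical). Web content
# defaults to a read-only type for the web server; writes are opted in
# per directory by labelling it httpd_sys_rw_content_t:
semanage fcontext -a -t httpd_sys_rw_content_t \
    '/srv/mysite/htdocs/wp-content/uploads(/.*)?'

# Apply the stored label rules to the files on disk:
restorecon -Rv /srv/mysite/htdocs/wp-content/uploads
```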




Further to this, web servers can interact not only with disk content, 
but databases, content back-ends (e.g. php-fpm) and even with hardware 
and communication devices. SELinux blocks all this until such time as 
you do the analysis and decide that particular interactions should be 
allowed.


It's a pain to get right, but compared to the pain of your server being 
exploited, not so much.


You've reminded me that of course nginx (in my case) as well as php-fpm 
needs read access to a bunch of stuff (not php ... unless it's a site 
that publishes php scripts ...), but no write to anything. So I'll need 
to revise my model for that, at least :-(


Though I guess that can be covered by 'other' permissions (with nginx 
config to prevent serving php and other files that it shouldn't).


I think I'm leaving SELinux in the 'too hard' basket for the time being; 
it looks like it would need changes to a bunch of other stuff as well 
(eg postfix ...)


Thanks,
Richard



Re: website permissions and ownership

2021-02-02 Thread Richard Hector

On 2/02/21 10:37 pm, john doe wrote:

On 2/2/2021 9:55 AM, Richard Hector wrote:

On 2/02/21 9:11 pm, Jeremy Ardley wrote:


On 2/2/21 3:09 pm, Richard Hector wrote:

Hi all,

I'm reviewing how I set up websites (mostly Wordpress at the moment),
and would like other opinions on what I'm planning is sane.

My plan is to have a user eg "mysite" that owns all/most of the
standard files and directories.

The webserver (actually php-fpm) would run as "mysite-run".

Group ownership of the files would then be mysite-run, but
group-write permission would not be granted except where required, eg
the 'uploads' and 'cache' directories.

Files in those directories, created by the php-fpm process, would
obviously be owned by mysite-run.

Alternatively the group ownership of most of the directories could
remain with mysite, with the uploads and cache directories
group-owned (and group-writeable) by mysite-run.

The objective of course is that site code can't write to anything it
shouldn't. I know that means that I'll have to install upgrades,
plugins etc with the wp cli tool.

I earlier had thoughts of improving this with ACLs, but a) this got
really complicated and b) it didn't seem to solve some of the
problems I was trying to solve.

I wanted to be able to allow other users (those who might need to
update sites) to be able to log in as themselves and make changes,
but IIRC nothing (other than sudo or setuid tools) will allow them to
set the ownership back to 'mysite', which is what I want it to be.
I'm aware of bindfs, which allows fuse mounting of filesystems with
permission translation, but as far as I can tell, it doesn't allow
mapping of userids. Tools could help, but I'd rather some of these
users had SFTP access only, which would prevent such tools being used.

Any thoughts?
Am I mostly on the right track?

Thanks,
Richard



What you are doing sounds pretty O.K. Though I personally also use
SELinux for web facing services.


Thanks.

I haven't looked in to SELinux. I looked at AppArmor, but it appears
that it won't work as expected in an LXC container, which is where I run
this. Would SELinux work there? SELinux, from what I can see, seems more
complex to learn than AppArmor.


To accommodate other users I suggest you set up staging areas where
they can upload content that you periodically sync to the website
using a privileged process. This means you don't have to give any
rights to users other than access to the staging areas.


Yes. I can foresee difficulties with my clients not being able to see
their changes immediately.

Inotify could be of interest there by monitoring the staging area.


1)  https://man7.org/linux/man-pages/man7/inotify.7.html


Agreed. Worth bearing in mind, thanks. Though IIRC it's quite a pain to 
keep watch on an entire directory tree; you have to maintain the list of 
watched directories rather than just watching the top.


Cheers,
Richard



Re: website permissions and ownership

2021-02-02 Thread Richard Hector

On 2/02/21 9:11 pm, Jeremy Ardley wrote:


On 2/2/21 3:09 pm, Richard Hector wrote:

Hi all,

I'm reviewing how I set up websites (mostly Wordpress at the moment), 
and would like other opinions on what I'm planning is sane.


My plan is to have a user eg "mysite" that owns all/most of the 
standard files and directories.


The webserver (actually php-fpm) would run as "mysite-run".

Group ownership of the files would then be mysite-run, but group-write 
permission would not be granted except where required, eg the 
'uploads' and 'cache' directories.


Files in those directories, created by the php-fpm process, would 
obviously be owned by mysite-run.


Alternatively the group ownership of most of the directories could 
remain with mysite, with the uploads and cache directories 
group-owned (and group-writeable) by mysite-run.


The objective of course is that site code can't write to anything it 
shouldn't. I know that means that I'll have to install upgrades, 
plugins etc with the wp cli tool.


I earlier had thoughts of improving this with ACLs, but a) this got 
really complicated and b) it didn't seem to solve some of the problems 
I was trying to solve.


I wanted to be able to allow other users (those who might need to 
update sites) to be able to log in as themselves and make changes, but 
IIRC nothing (other than sudo or setuid tools) will allow them to set 
the ownership back to 'mysite', which is what I want it to be. I'm 
aware of bindfs, which allows fuse mounting of filesystems with 
permission translation, but as far as I can tell, it doesn't allow 
mapping of userids. Tools could help, but I'd rather some of these 
users had SFTP access only, which would prevent such tools being used.


Any thoughts?
Am I mostly on the right track?

Thanks,
Richard



What you are doing sounds pretty O.K. Though I personally also use 
SELinux for web facing services.


Thanks.

I haven't looked in to SELinux. I looked at AppArmor, but it appears 
that it won't work as expected in an LXC container, which is where I run 
this. Would SELinux work there? SELinux, from what I can see, seems more 
complex to learn than AppArmor.


To accommodate other users I suggest you set up staging areas where they 
can upload content that you periodically sync to the website using a 
privileged process. This means you don't have to give any rights to 
users other than access to the staging areas.


Yes. I can foresee difficulties with my clients not being able to see 
their changes immediately. I could also probably use a git hook to 
deploy a suitably tagged branch, but then I also probably need to help 
my clients use git :-) Or if I had some kind of web portal for them, I 
could give them a deploy button, but I'm not ready to do that yet.


This also helps in disaster recovery as you can set up and maintain the 
entire static site from staging areas. Ideally you should be able to 
fire up a virtual server and load it from the staging area whenever you 
want. If it goes down, fire up another one.


Your only issue is database records for which you'll need to set up a 
different recovery process.


Useful points too.

Thanks,
Richard



website permissions and ownership

2021-02-01 Thread Richard Hector

Hi all,

I'm reviewing how I set up websites (mostly Wordpress at the moment), 
and would like other opinions on what I'm planning is sane.


My plan is to have a user eg "mysite" that owns all/most of the standard 
files and directories.


The webserver (actually php-fpm) would run as "mysite-run".

Group ownership of the files would then be mysite-run, but group-write 
permission would not be granted except where required, eg the 'uploads' 
and 'cache' directories.


Files in those directories, created by the php-fpm process, would 
obviously be owned by mysite-run.


Alternatively the group ownership of most of the directories could 
remain with mysite, with the uploads and cache directories 
group-owned (and group-writeable) by mysite-run.


The objective of course is that site code can't write to anything it 
shouldn't. I know that means that I'll have to install upgrades, plugins 
etc with the wp cli tool.
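A rough sketch of that layout on Debian. Untested, and the user names and paths ("mysite", "mysite-run", /srv/mysite) are the hypothetical ones from the description:

```shell
# Untested sketch; names and paths are hypothetical.
# Code owner and runtime user are separate accounts:
adduser --system --group --home /srv/mysite mysite
adduser --system --group --no-create-home mysite-run

# Code tree: owned by mysite, readable (not writable) by the runtime group.
install -d -o mysite -g mysite-run -m 0750 /srv/mysite/htdocs

# Only uploads and cache are group-writable; the setgid bit keeps files
# created there in the mysite-run group regardless of creator:
install -d -o mysite -g mysite-run -m 2770 /srv/mysite/htdocs/wp-content/uploads
install -d -o mysite -g mysite-run -m 2770 /srv/mysite/htdocs/wp-content/cache
```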


I earlier had thoughts of improving this with ACLs, but a) this got 
really complicated and b) it didn't seem to solve some of the problems I 
was trying to solve.


I wanted to be able to allow other users (those who might need to update 
sites) to be able to log in as themselves and make changes, but IIRC 
nothing (other than sudo or setuid tools) will allow them to set the 
ownership back to 'mysite', which is what I want it to be. I'm aware of 
bindfs, which allows fuse mounting of filesystems with permission 
translation, but as far as I can tell, it doesn't allow mapping of 
userids. Tools could help, but I'd rather some of these users had SFTP 
access only, which would prevent such tools being used.


Any thoughts?
Am I mostly on the right track?

Thanks,
Richard



Re: megacli help

2021-01-09 Thread Richard Hector

On 7/01/21 5:28 am, basti wrote:

Hello, I want to set all my drives to RAID0 to use mdadm.


Is there any advantage in that?

I entirely agree with using mdadm rather than hardware raid, given the 
choice, since it allows for switching the disks into a system with a 
different card/adapter. But since RAID0 in this case is not JBOD, it's 
my understanding you can't do that anyway. IIRC last time I had to deal 
with this, we decided to just use the HW RAID and take advantage of the 
(slight) performance improvement (we measured it). But that's a long 
time ago now.


Richard



Re: recommendations for supported, affordable hardware raid controller.

2021-01-02 Thread Richard Hector

On 3/01/21 12:24 am, Andrei POPESCU wrote:

On Sb, 02 ian 21, 01:40:14, David Christensen wrote:


On Linux (including Debian), MD (multiple disk) and LVM (logical volume
manager) are the obvious choices for software RAID.  Each have their
respective learning curves, but they're not too high.


An interesting article I stumbled upon:
http://www.unixsheikh.com/articles/battle-testing-data-integrity-verification-with-zfs-btrfs-and-mdadm-dm-integrity.html


Hmm. It only talks about software raid in the context of RAID-5. They 
acknowledge that RAID-5 is 'frowned upon', but don't go into why, and 
say they think it's great. My take: once you've lost one disk, you have 
the same reliability as a RAID-0 (stripe) set of what's left - much less 
reliable than no RAID at all.


I generally stick with RAID-1, but would consider RAID-10.
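A back-of-envelope check of that claim, using a purely hypothetical 97% annual per-disk survival probability and assuming independent failures:

```shell
p=0.97   # assumed per-disk annual survival probability (made up for illustration)

# Healthy 4-disk RAID-5: survives zero failures, or exactly one.
healthy=$(awk -v p="$p" 'BEGIN { printf "%.4f", p^4 + 4*(1-p)*p^3 }')

# Degraded (one disk already lost): all 3 survivors must survive,
# which is exactly the RAID-0 stripe failure model.
degraded=$(awk -v p="$p" 'BEGIN { printf "%.4f", p^3 }')

echo "healthy RAID-5: $healthy   degraded (RAID-0-like): $degraded"
# prints: healthy RAID-5: 0.9948   degraded (RAID-0-like): 0.9127
```

So under these made-up numbers the degraded array is indeed noticeably less reliable than a single disk (0.97), matching the reasoning above.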


My take:

If you care about your data you should be using ZFS or btrfs.


Licensing issues and the resulting complications stop me using ZFS, and 
last I heard btrfs wasn't regarded as being as reliable as ext3/4 or xfs 
(I generally use xfs).


I may be out of date, and I've heard bad comments about xfs too ...


In case of data corruption (system crash, power outage, user error, or
even just a HDD "hiccup") plain md without the dm-integrity layer won't
even be able to tell which is the good data and will overwrite your good
data with bad data. Silently.


I guess I need to investigate that. Any further references? I've had 
crashes and power outages and never noticed any problems, not that that 
means they won't happen (or even that they haven't happened). Does a 
journalling filesystem on top not cover that?


Cheers,
Richard



Re: mdadm usage

2021-01-01 Thread Richard Hector

On 31/12/20 7:29 am, Marc Auslander wrote:

IMHO, there are two levels of backup.  The more common use is to undo
user error - deleting the wrong thing or changing something and wanting
to back out.  For that, backups on the same system are the most
convenient.  And if its on the same system, and you have raid1, you
don't need a separate physical drive.


Backing up to (or restoring from) the same drive will be slow. 
Especially if it's spinning disks, with seek time considerations.


Richard


