Re: [DNG] Wifi dropping randomly

2022-08-22 Thread Bob Proulx via Dng
Jim Murphy via Dng wrote:
> My wifi connection drops randomly. It can sometimes run for days.
> Other times it may drop within hours of a reboot. There seems to be
> no pattern to when. Initially rebooting seemed to be the only way to
> get the network back. Under xfce4 disabling the wifi and/or network,
> then re-enabling them had no effect. After many google searches
> I finally found at:
>
> https://askubuntu.com/questions/26054/how-to-restart-wifi-interface-without-rebooting-it-drops-connection
>
> I now have the alias(zsh):
>
> alias reload_network_module='sudo modprobe -r iwldvm && sudo modprobe
> iwldvm'
>
> to re-enable the wifi. So far this has restored the network connection
> without having to reboot. Strange as it may seem, the wifi connection
> has stopped dropping as frequently since reloading the iwldvm module
> this way. FWIW

Is the problematic WiFi Access Point encrypted using WPA/WPA2?  Or is
it an open, unencrypted Access Point?

I find that WPA/WPA2 APs are perfectly reliable using wpasupplicant
which will handle glitches and reconnecting without problem.  But I
find that open APs are problematic.  They will often glitch and drop
the connection.  And since the Linux kernel connects to those
directly, the connection simply drops.  The WiFi manager utility
monitoring the connection must notice this and retry the connection.
Being a glitch, the retry always succeeds.  But how well this is
handled depends upon the connection manager.
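
For illustration only, a crude watchdog along these lines is roughly
what such a manager does for an open AP.  This is just a sketch, run
as root, and the SSID, interface name, and gateway address here are
assumptions to be adjusted:

    #!/bin/sh
    # Re-associate with an open AP whenever the gateway stops answering.
    # A sketch only -- a real connection manager does this with more care.
    AP="Starbucks WiFi"      # assumed SSID
    DEV=wlan0                # assumed interface
    GW=10.0.0.1              # assumed gateway to probe
    while sleep 30; do
        if ! ping -c 1 -W 5 "$GW" >/dev/null 2>&1; then
            iw dev "$DEV" connect "$AP"
            dhclient -v "$DEV"
        fi
    done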

Up until now I have been using WICD.  But of course, as we all know,
that is a Python 2 program, and as we upgrade from Beowulf to Chimaera
we wicd users are forced to find alternatives.  The connman network
connection manager is often recommended for those like me who have
been frustrated by network-manager.

> Computer:  ThinkPad X230
> configuration: broadcast=yes driver=iwlwifi

I normally use an X220 with the same driver.  I am using the
firmware-iwlwifi package from non-free.  I have no problems with
WPA/WPA2 sites.  And some open APs are perfectly reliable.  While
other open APs glitch and drop the connection very often.  The wicd
manager handles these seamlessly.

>  % ls /lib/modules/**/iwldvm.ko
> /lib/modules/5.10.0-10-amd64/kernel/drivers/net/wireless/intel/iwlwifi/dvm/iwldvm.ko
> /lib/modules/5.10.0-9-amd64/kernel/drivers/net/wireless/intel/iwlwifi/dvm/iwldvm.ko

That's the free software portion that is in the mainline Linux
kernel.  However the iwlwifi driver also needs the firmware files from
the firmware-iwlwifi package to drive the hardware.  Do you have it
installed?

dpkg -l | grep firmware-iwlwifi

If not then enable the non-free suite and install it.

apt-get install firmware-iwlwifi

> If anyone has any advice it would be appreciated. Since I have a
> workaround it is just an annoyance.

For some debugging one can run ip monitor and see the status changes
of network devices as they happen.

ip monitor

And then for example I might associate with an open AP.

sudo iw dev wlan0 connect "Starbucks WiFi"

And then I might see this from "ip monitor".  Also similar data is
logged to the /var/log/syslog file.

10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b STALE
10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b PROBE
10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b REACHABLE
10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b STALE
10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b PROBE
10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b REACHABLE

Everything is all good.  And then see this.

Deleted 10.0.0.1 dev wlan0 lladdr 74:83:c2:78:b0:8b REACHABLE
Deleted ff02::2 dev wlan0 lladdr 33:33:00:00:00:02 NOARP
Deleted ff02::16 dev wlan0 lladdr 33:33:00:00:00:16 NOARP
2: wlan0:  mtu 1500 qdisc noqueue state DOWN group default
link/ether 20:1e:88:78:61:94 brd ff:ff:ff:ff:ff:ff
2: wlan0: 
link/ether
10.0.0.1 dev wlan0  FAILED
10.0.0.1 dev wlan0  FAILED
10.0.0.1 dev wlan0  FAILED
10.0.0.1 dev wlan0  FAILED
10.0.0.1 dev wlan0  FAILED
Deleted 2: wlan0    inet 10.0.8.118/20 brd 10.0.15.255 scope global dynamic wlan0
       valid_lft 3826sec preferred_lft 3826sec
Deleted 10.0.0.0/20 dev wlan0 proto kernel scope link src 10.0.8.118 linkdown
Deleted broadcast 10.0.15.255 dev wlan0 table local proto kernel scope link src 10.0.8.118 linkdown
Deleted local 10.0.8.118 dev wlan0 table local proto kernel scope host src 10.0.8.118
Deleted 10.0.0.1 dev wlan0  INCOMPLETE

A glitch has caused the connection to the open access point to fail
and the connection was dropped.  I find that the local Starbucks WiFi
does this very often.  Another local coffee shop with an open AP
however is absolutely reliable and never drops the connection.

A network connection manager would retry the connection and that would
immediately succeed.  Here I am working the task manually so I am the
connection manager.

sudo iw dev wlan0 connect "Starbucks WiFi"
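
When working it manually like this, the DHCP lease usually also needs
to be refreshed after re-associating.  A sketch, assuming isc-dhcp-client:

    sudo dhclient -v wlan0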

That's normally handled automatically by 

Re: [DNG] mutt attachment problem

2022-08-22 Thread Bob Proulx via Dng
Antony Stone wrote:
> Haines Brown wrote:
> > The size of the zip file is 164 Mb
>
> Just in case it's actually your mail service provider responding to your mail
> client trying to send such a large message, try the same thing but with a
> small zip file (such as 2Mb) to see whether that makes the difference.

164MB!  OMG!  Repeat after me.  "SMTP is not a file transport
protocol."  The likelihood of being able to send a 164MB email message
from one random system on the net to another random system on the net
today is vanishingly small.  The default for most popular mail
transport agents even today is 10MB max in size.  Though most sites
have increased that to at least 50MB and some to 100MB due to people
trying to send photos through email.  But 164MB?  I don't know of any
site that allows such a large single email.  Gmail limits message size
to 25MB https://support.google.com/a/answer/1366776?hl=en.

Since email is not a file transport protocol I suggest using a
different method to transfer those files.  I am worthless for
suggesting a large binary file drop method since I have my own web
site and so I always use it for these things.  I just copy it there
and pass along a URL.  But I know that not everyone maintains their
own servers and associated web sites.
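
For example, something along these lines.  This is only a sketch; the
host name, path, and URL are made up:

    # copy the archive to a web-served directory on my own server
    scp bigfile.zip example.org:/var/www/html/drop/
    # then mail only the resulting URL instead of the file
    echo 'https://example.org/drop/bigfile.zip' | mutt -s 'big file' someone@example.com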

Perhaps some kind souls on the list might suggest possible ways to
send large binary files?  That's the real problem and it needs a real
solution.

Bob




Re: [DNG] Upgrade & migrate?

2022-07-07 Thread Bob Proulx via Dng
Antony Stone wrote:
> My definition of skipping releases would be:
>
>   Ascii (skip Beowulf) -> Chimaera
>
>   Stretch (skip Buster) -> Bullseye
>
>   Stretch (skip Buster or Beowulf) -> Chimaera

Agreed.  Those would be skipping releases.  Not supported.

> I'm not asking about any of those.

I almost trimmed all of the above out of my response here.  But I left
it in because it is good to have confirmation and clarity, and the
above are good concrete examples.

> I'm asking about going from one version of Debian to the next version of
> Devuan.

>>> - upgraded and migrated from the previous Debian release (eg: Stretch -> 
>>> Beowulf)
>>> Is that last option still valid for eg: Buster -> Chimaera?

First, a caution: I am not an authoritative answerer, this is just me, but...

I think what you are asking is okay to do and you can be successful
doing it.  I have done it several times with success.  But this might
be because of what I have or do not have installed or because I have
extensively cleaned my system before the upgrade.  With any upgrade
the best success is when it has had the best preparation.


https://www.proulx.com/~bob/doc/bobs-guide-to-upgrading-debian/bobs-guide-to-upgrading-debian.html

I think what you want to do will work but it is not the documented
procedure.  The documented procedure is to go from Debian 10 Buster to
Debian 11 Bullseye and then from Bullseye to Chimaera.  That's well
documented within each OS.  So you are somewhat skipping one of the
connecting points.  But you are NOT skipping an OS release in the
classic way that is definitely not supported.  You will be seeing all
of the transition points along the way.  Transitions dealing with
Perl, Python, and so on of all of the library transions.

The important thing is to be aware of the manual steps needed to step
from systemd as init to a supported init in Devuan.  The process is
well documented for going from Bullseye to Chimaera here.


https://www.devuan.org/os/documentation/install-guides/chimaera/bullseye-to-chimaera
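
In very rough outline that documented path is the usual suite rename
and upgrade dance.  The linked guides are authoritative and cover the
init migration, so treat this only as a sketch of the shape of it:

    # Debian 10 Buster -> Debian 11 Bullseye: edit /etc/apt/sources.list,
    # replacing buster with bullseye (and buster/updates with
    # bullseye-security, which was renamed), then:
    apt-get update
    apt-get upgrade
    apt-get dist-upgrade

    # Bullseye -> Devuan 4 Chimaera per the Devuan guide above:
    # point sources.list at deb.devuan.org chimaera, install
    # devuan-keyring, then update, upgrade, dist-upgrade, and install a
    # non-systemd init such as sysvinit-core as that guide describes.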

If you decide to avoid the Debian 10 Buster to Debian 11 Bullseye
upgrade and go directly to Devuan 4 Chimaera then you are stepping
outside of the well traveled path.  Depending upon the exact
collection of packages you have installed on your system there might
be some snag that would cause problems.  If you feel confident of your
abilities to self-rescue things in the case of problems then I don't
see any major reason it would not work for you.

But why tempt fate?  What's the actual savings of avoiding the
traveled path?  Chimaera is an overlay on Bullseye.  Therefore
upgrading from Debian 10 Buster to 11 Bullseye is actually on the path
to Chimaera.  And then the upgrade from Bullseye to Devuan 4 Chimaera
is a much smaller upgrade than from Devuan 3 Beowulf.  Most packages
are the exact same packages.  Only a few are different.

If I were upgrading 100 servers in a datacenter then I would test
upgrade a handful to verify the process and then do all of them and it
would save quite a bit of time.  But if I were just upgrading my own
desktop and wanted to get done then I would go ahead and follow
the documented path.  It would save time in the long run to stay on
the path.  Slow is fast and fast is slow.

On the other hand, if it were me and I wanted to test whether this
works, then I would snapshot a VM for perfect safety, try it, and see
what issues occurred.  Do it on something where a failure would not be
a problem.

Good Luck!
Bob




Re: [DNG] strange effect on overwrite

2022-06-23 Thread Bob Proulx via Dng
ael via Dng wrote:
> Isn't conv=sync needed to make sure that you are not just seeing buffer
> effects?

Better oflag=sync,direct rather than conv=sync, though both are
similar in result.
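
For example, a sketch; the device name is made up, so double-check it
before running anything like this:

    # write an image and force synchronous, unbuffered output
    dd if=image.img of=/dev/sdX bs=1M oflag=sync,direct status=progress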

Bob




Re: [DNG] assistance sought to repair install - - re:python

2022-05-18 Thread Bob Proulx via Dng
Antony Stone wrote:
> o1bigtenor wrote:
> > I was trying to clean up my /usr/bin directory so was discarding older
> > versions of software.
>
> You should never manually delete binaries or library files which have been
> installed by a package management system.  If you want to get rid of them use
> something like "aptitude purge python3.9" so that:

Agreed.  If it is in /usr/bin or /bin then leave it to the package
management tools.  /usr/local/bin is available for local admin hacking
if that is desired.

> > Had python3.10 available so thought (foolishly - - now clear) that
> > python3.9 was a great candidate for discard.
> > Nope nix and no way.
> >
> > Now I have myself more than a bit of a mess with my installed python
> > systems.
> >
> > Suggestions as to how to repair my 'mess'?
>
> Have you tried simply "aptitude install python3.9" or possibly "aptitude
> reinstall python3.9" etc (for each of the versions of python which you seem to
> be having problems with)?

I would "install --reinstall" the list of things that were manually
removed.  That should install the /usr/bin components again.  For
example here is one command.

apt-get install --reinstall python3.9 python3.9-minimal

I am pretty sure that would work.  But for python2.7 I think the
better option might be to purge it from the package manager database.
Because python2 has been deprecated and has been removed from later
releases and is no longer available.  Therefore when you try a
reinstall it will likely fail due to not being available.  But go
ahead and see if it works.

apt-get install --reinstall python2.7 python2.7-minimal

But if that fails as I expect then I would try to purge it.

apt-get purge python2.7-minimal

Be careful with that purge action!  Look carefully at any dependencies
it is going to purge along with it.  If it is purging things you don't
want purged then say No and don't do it.

In the case that a manual remove of the binary created a problem then
a manual restoration of the binary would fix the problem.  Either
restore the /usr/bin/python2.7 from backup or copy it from another
system.  Since it is the same across all of the Devuan systems of the
same release and version level it means that if you don't have a local
backup that you could copy it from another system.  Maybe from a local
friend's system?  That would restore what was removed and things would
return to normal operation.

> > (Following is what the apt update upgrade process barfs up.
> > Errors were encountered while processing:
> >  python2.7-minimal
> >  python3.9-minimal
> >  python3.9
> >  python2.7
> >  python3.9-dev

For each of those you could look to see if they are available using
apt-cache to query it for information about installed and available
versions.

apt-cache policy python2.7-minimal
apt-cache policy python3.9-minimal
... and so on ...
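
It can also help to let the tools themselves report what they think is
broken or half-configured.  These are standard commands, nothing exotic:

    dpkg --audit          # list packages in a half-installed or broken state
    apt-get -f install    # ask apt to try to fix unmet dependencies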

Good luck!
Bob




Re: [DNG] resolv.conf

2022-05-10 Thread Bob Proulx via Dng
Rowland Penny via Dng wrote:
> That is just a bandaid on something broken. If you have to stop
> something being changed, then there must be something trying to change
> it. You need to find what that 'something' is and stop that changing
> resolv.conf

+1!  But I completely understand those that chattr it immutable and
then get on with their lives.  Any port in a storm.  What I would
rather see is the software distribution address the problem more
globally so that people wouldn't need to stumble through it unaided.

Commentary: We are in the age of mobile devices.  By default
everything assumes that it is a mobile device.  Even in large
enterprise corporations IT groups are pushing for servers to be
configured as mobile devices.  Because as a practical matter you can't
configure mobile devices statically.  That would make no sense.  But
if you want everything to be configured exactly the same then it is
*possible*, even if not desirable IMNHO, to configure static servers
as a mobile device.  Maybe.  And that is the rub.

Static <------ every possible configuration in between ------> Dynamic

I think most of the problems are near the Dynamic side, where things
are messy.  But not all of the way to that side; fully Dynamic is
simple.  It is the operating region near the Dynamic side, but not
fully there, where things are problematic.

On the Static side things are simple.  IP address, gateway router,
nameservers, domain search path, ntp servers, and nothing ever changes
again.

On the Dynamic side things are more complex.  It's like being in a
relationship.  There is the client wanting an IP address and there is
the DHCP server coordinating one.  Both sides have to negotiate for a
happy relationship.  And it is more than just an IP address and
gateway.  It's also DNS nameservers, default domain search path, ntp
servers, other things.  It is just like with people relationships.
"It's complicated!"

I want my mobile devices to ignore the offered domain search path and
always use the one I want them to use, even though I am on random
networks.  Where possible I want my mobile devices to use my choice of
nameserver services in order to get my private domain names.  These
things need to be MERGED together in order to work properly.  It's
messy for me to do for myself where I know what I want.  It's very
hard to do for others, where no one can agree on what they want.

Some Details:

I am using isc-dhcp-client and the configuration there to always fix
my domain search path as I wish is to append a line like this to the
end of the /etc/dhcp/dhclient.conf file.  (man 5 dhclient.conf)

supersede domain-search "proulx.com";

And then the dhclient process will always set it as I have configured
and will ignore the server side offer.

I'll stop with that tidbit.  Because there are multiple dhcp clients
in use.  Everyone has a favorite.  Because there are many ideas on
DNS.  Install a local caching nameserver or use the DHCP provided one?
What about NTP servers?  Everyone has a different idea.  On my mobile
devices I install unbound and dynamically configure it depending upon
the network upon which I am connected.
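
For the curious, the static part of that looks roughly like this in
/etc/dhcp/dhclient.conf.  A sketch only, assuming isc-dhcp-client and a
local unbound listening on 127.0.0.1; the dynamic per-network part is
more involved:

    supersede domain-search "proulx.com";
    # consult the local caching resolver first, then the offered ones
    prepend domain-name-servers 127.0.0.1;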

Because of the many different implementations, and the even more
plentiful opinions on how things should be done, it is really
difficult to fully automate the near-Dynamic side of the scale.

Bob




Re: [DNG] daedalus with chromiium

2022-04-17 Thread Bob Proulx via Dng
Haines Brown wrote:
> My message to the list seems to have gone amiss. This was it:

Your message did not go amiss.  The mailing list saw it.  I replied to
it.  You have replied to my message therefore I know you saw my
message too.  Unfortunately it seems that though we are seeing each
other's messages that we are definitely not communicating.

I am sorry that we are "talking past each other".  I tried to help.
Sorry it did not work out.

Bob




Re: [DNG] daedalus with chromiium

2022-04-16 Thread Bob Proulx via Dng
Haines Brown wrote:
> I still get "Unable to locate package chromium". It seems that when I
> did update/dist-upgrade after modifying sources.list to accommodate
> daedalus I did not get the no InRelease file error. The chimaera
> previously installed on this disk did not have chromium installed.

What does this say for you?

apt-cache policy chromium

It should say something similar to this.

root@clash:~# apt-cache policy chromium
chromium:
  Installed: (none)
  Candidate: 100.0.4896.127-1
  Version table:
 100.0.4896.88-1 500
500 http://deb.debian.org/debian daedalus/main amd64 Packages

If it does not then investigate why not.

It's just possible that, having changed suites, apt is waiting for
an acknowledgment.  This behavior was added a few years ago.

   --allow-releaseinfo-change
   Allow the update command to continue downloading data from a
   repository which changed its information of the release contained
   in the repository indicating e.g a new major release. APT will fail
   at the update command for such repositories until the change is
   confirmed to ensure the user is prepared for the change. See also
   apt-secure(8) for details on the concept and configuration.

   Specialist options (--allow-releaseinfo-change-field) exist to
   allow changes only for certain fields like origin, label, codename,
   suite, version and defaultpin. See also apt_preferences(5).
   Configuration Item: Acquire::AllowReleaseInfoChange.

To acknowledge the change do this as root.

apt-get update --allow-releaseinfo-change
apt-get update

Then follow the acknowledgment by another update.  It's confusing but
as far as I can see from documentation and experimentation it needs
both of those.  The first to acknowledge and the second to actually do
the update of the new Packages indexes.  After that everything should
be available.

> I tried $ sudo apt-get --allowUnauthenticated install chromium, but
> this command was unable to locate chromium.
>
> I did not create a file /etc/apt/apt.conf/d/99myown with line
>
>   APT::Get::AllowUnauthenticated "true";
>
> because very insecure. In any case I'm not sure it would work

The main useful purpose of --allow-unauthenticated is for *older*
releases in the archive that have expired keys.  Because those are
archived they are not getting signed with updated keys.  The original
signatures are still as indicative as before but now they have expired.

   --allow-unauthenticated
   Ignore if packages can't be authenticated and don't prompt about
   it. This can be useful while working with local repositories, but
   is a huge security risk if data authenticity isn't ensured in
   another way by the user itself. The usage of the Trusted option for
   sources.list(5) entries should usually be preferred over this
   global override. Configuration Item:
   APT::Get::AllowUnauthenticated.


Daedalus is currently active and there should be no need to use the
--allow-unauthenticated option.

It's also useful for unsigned local repositories and other uses too.
Not to give the idea that it isn't ever useful.  But it worries me to
see it used for Daedalus on the currently active repositories.

Bob




Re: [DNG] installing Wine 32

2022-03-31 Thread Bob Proulx via Dng
terryc wrote:
> Haines Brown wrote:
> > I'm running Chimaera and trying to play an old CDROM. I assume
> > simplest way is to install wine.
>
> IME, it is generally about the worst way and/or problematic. At best,
> it will only work until something is upgraded.

Indeed.  That thought really made me grimace.

> What is on the cdrom?
> There might be better ways to access it.
> Like vlc for any multimedia stuff.

I also thought of using Sound Juicer to play the music from the cdrom.

However if the original poster thought "I will use wine" then I think
it more likely that this "old CDROM" is an old PC game for either
MS-DOS or MS-Windows.

Bob




Re: [DNG] spam disposal

2022-03-27 Thread Bob Proulx via Dng
Haines Brown wrote:
> I'm running Chimaera and have procmail and spamassassin installed.
> I constructed a file ~/.procmailrc. In it are the lines:
>
>   :0:
>   * ^X-Spam-Status: Yes
>   $HOME/mail/spam
>
> I created this spam directory and restarted spamassassin. Yet
> mail having the line "X-Spam: Yes" in its header is not
> redirected to ~/mail/spam

You need to debug it.  Is it running through SpamAssassin?  Is SA
tagging these messages as spam with that header?  Note that
SpamAssassin needs to have learned about at least 200 spam and
non-spam messages before the internal Bayes learning engine has
learned enough to be enabled.

Look at a spam message that has been passed through SpamAssassin.
What do the headers on the message say?  If there are no headers then
it did not get passed through SpamAssassin and there was nothing for
procmail to match against.
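
One quick check is to run a saved message through SpamAssassin by hand
and look at the headers it adds.  These are standard spamassassin
options; the path to the saved message is made up:

    spamassassin -t < ~/mail/sample-spam.msg | less
    # look for the X-Spam-Status: header near the top of the output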

Add debugging to the procmailrc file.

LOGFILE=$HOME/var/log/procmail.logfile
VERBOSE=on

I put that where I show in my example.  Put that log file where you
have a directory and won't forget about it.  It will grow, and grow,
and grow, without bounds until the disk is full.  I trim mine daily
with a cron task.

45 8 * * *  savelog -q -d -C $HOME/var/log/procmail.logfile

Look in the log file to see what is happening.

Bob




Re: [DNG] chimaera: no /usr/lib/tmpfiles.d/legacy.conf?

2022-03-23 Thread Bob Proulx via Dng
Alexander Bochmann wrote:
> ...on 2022-03-21 21:31:25, Bob Proulx via Dng wrote:
>  > However AFAIK the /var/lock/subsys directory is a Red Hat specific
>  > directory such as found on RHEL and Fedora for use for their init
>  > scripts.  It doesn't have a use by other scripts.  It is created by an
>
> Actually what brought me on this track turns out
> to be the init script for acct, which has:
>
>  > LOCKFILE=/var/lock/subsys/acct
>  > [..]
>  > touch $LOCKFILE

In that case it would be easy to either modify that variable or to
have the script mkdir the directory.  Perhaps this added to that
script?  Right before the existing touch?

mkdir -p /var/lock/subsys
touch $LOCKFILE

I am assuming that software is not native Devuan software.  Because
the Devuan/Debian accounting package should work without these
modifications.  But since I am assuming this has been ported from a
Red Hat system then it's likely that it might source a
/etc/sysconfig/acct or similar file.  In which case I would look to
see if it is possible to override the setting there.  Since I think
you would have seen an error on it not existing already in that case.
Maybe.  I am making too many assumptions here.

> This is in turn run each night from the acct cron.daily
> script which rotates the accounting log files and restarts
> the service, producing a message that /var/lock/subsys/acct
> can't be touched...

Update to correct a mistake I made in my posting.

>>mkdir /var/lock
>>ln -s ../../run/lock /var/lock/subsys

I don't know why, but when I looked earlier I did not see that on
Devuan systems /var/lock is already a symlink.

lrwxrwxrwx 1 root root 9 May 25  2011 /var/lock -> /run/lock

For whatever reason I was confused and thinking it did not exist.
Which caused me to give poor advice for the system tweak workaround.
Because that would error on the mkdir (harmlessly) and put the symlink
in /run/lock/subsys which is a dynamically created tmpfs and it won't
persist across reboots.  It just won't work as I had suggested.

Since /var/lock on Devuan is already a symlink to /run/lock that part
is okay and working without tweaks needed.  However the foreign system
init script touching /var/lock/subsys/acct is probably the easiest to
modify to mkdir -p there before the touch instead.  Then it will
always be created if needed and it will be needed once after every
system reboot.

Meanwhile...  For Daedalus Testing and newer systems...  A friend sent
me a direct reply with another alternate suggestion to install
"systemd-standalone-tmpfiles" which would be especially good if the
foreign software uses /etc/tmpfiles.d/acct.conf or similarly named
file already to create files.  It exists starting in Daedalus Testing
and Ceres Unstable now but does not exist in Beowulf or Chimaera
though there is a backport for Chimaera.

I don't know very much about this systemd-standalone-tmpfiles package
and haven't tried it myself.  But certainly if there were non-native
software that already expected to use the systemd tmpfilesd interface
in /etc/tmpfiles.d/* and one is running a Daedalus or newer system
then this seems like a reasonable solution to work this problem too.
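
In that case the drop-in is a one-liner of roughly this shape, in
tmpfiles.d(5) syntax; the file name here is my own guess:

    # /etc/tmpfiles.d/subsys.conf
    d /run/lock/subsys 0755 root root -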

>  > And then a piece of software meant for Red Hat would run happily
>  > without knowing it was on Devuan.  At least for this portion.
>
> Thanks for the detailed explanation!

Happy to help!  And I hope this discussion helps! :-)

Bob




Re: [DNG] chimaera: no /usr/lib/tmpfiles.d/legacy.conf?

2022-03-21 Thread Bob Proulx via Dng
Alexander Bochmann wrote:
> ...on 2022-03-21 23:33:41, Alexander Bochmann wrote:
>  > I am running some software that expects the /var/lock/subsys directory
>  > to exist. It seems that this (and a few other directories) are created
>  > by /usr/lib/tmpfiles.d/legacy.conf, which is owned by systemd in Debian,
>  > and doesn't seem to exist in Devuan, or at least on my machine?
>
> Hrm, sending that mail out might have been slightly premature,
> because it seems the whole tmpfiles.d stuff is a systemd thing
> all around, and probably isn't even used on Devuan? I got confused
> by existing config files which are shipped by a bunch of the
> Debian packages...

It happens to all of us at times.  No worries.

However AFAIK the /var/lock/subsys directory is a Red Hat specific
directory such as found on RHEL and Fedora for use for their init
scripts.  It doesn't have a use by other scripts.  It is created by an
init script at start time and removed by the init script at stop time.

less /etc/rc.d/init.d/network  # On a RHEL system...

start)
...
touch /var/lock/subsys/network
stop)
...
rm -f /var/lock/subsys/network

It's used as a semaphore to ensure that init scripts don't run twice.
The network could be started and then started again but since it had a
semaphore in /var/lock/subsys then it would not run twice.

Also on RHEL systems at least it is a symlink.

lrwxrwxrwx. 1 root root 11 Jul  8  2019 /var/lock -> ../run/lock

I think you are hitting a typical problem of porting software designed
for one OS onto a different OS that has different conventions.  That's
not an unusual situation.  It just means that the software and/or the
system needs a tweak to make things work.

I'll assume for the moment that this is only being used in an init
script.  If so then this is simply indicating that you will need to
port the init script to Devuan.  Which usually is pretty easy.  And
then after the init script has been created for Devuan there won't be
a need for using /var/lock/subsys as that wasn't ever used on Debian.

If this is not an init script and used in some other script or context
then I go out on a limb and say that it was probably buggy.  Because
nothing else should have been using that directory.  But a lot of
corporate code is pretty poor quality.

As a general statement about porting software it is sometimes easier
to modify the system.  Especially if use of something is buried layers
deep in things.  I have often made symlinks as needed to make one
system look like another flavor of system.  Here it might be easy to
create a /var/lock symlink just like on Red Hat systems and perhaps to
also create a subsys directory.  Here I might be inclined to create
this adaptor layer.  After looking to ensure there isn't a naming
collision, probably won't be, then this can be statically created.

mkdir /var/lock
ln -s ../../run/lock /var/lock/subsys

The /run and /run/lock directories are already created upon each
boot.  Therefore there is no need to create them.  And since /var and
/var/lock and /var/lock/subsys would be static there is no need to put
anything into a boot time script as statically added is sufficient.
And then a piece of software meant for Red Hat would run happily
without knowing it was on Devuan.  At least for this portion.

Hope this helps on the topic of porting software. :-)
Bob


Re: [DNG] Configuring ethernet port for IPv6 - broken hardware revealed by dmesg

2022-01-28 Thread Bob Proulx via Dng
Joel Roth wrote:
> Gregory Nowak wrote:
> > Is there anything different in the dmesg(1) output for eth1 than
> > for your other interfaces?
>
> bingo:
>
> [467072.902423] e1000e :00:19.0 eth1: Hardware Error
>...
> This is a used Thinkpad pad, recently purchased.
> Fortunately usb-ethernet adapters are easily available.

This is very odd.  Thinkpads have been very robust and reliable and
used so much that they have been somewhat of a reference platform for
many years.  The e1000 driver family is very commonly used and is in
the mainline kernel.

I suggest booting a live boot image on your Thinkpad and seeing if the
wired interface works using one of those.  That would eliminate
something corrupted in your current installation.  It's easy to do.
It is a good quick second opinion on the network hardware.

Also, how did you install this system?  Did you use a "netinstall"
image originally?  Using the wired ethernet or WiFi?  Because if you
installed using wired ethernet (the most typical way I install) then
at install time the netinstall image was working okay with your wired
ethernet.

Booting a netinstall image in "Rescue Mode" (under the advanced
options) would also be a safe way to test if the hardware is working
under a live boot system.  And if this is a recent install using a
netinstall image then it's one you already have available.

Bob




Re: [DNG] Problems with SPF of dyne.org for this mailing list

2022-01-19 Thread Bob Proulx via Dng
Andrew McGlashan via Dng wrote:
> Not fixed?
> 
> Did anybody look at this.

Did you send a problem report to the mailing list owner?

There isn't much that we as users and participants of the mailing list
can do about hosting system problems.  Although there might be an
active subscriber who also has admin access, that usually isn't the
case.  We are just lucky if there is.

For mailing lists the mailing list owner is the foo-owner address.  So
for this mailing list dng it would be dng-owner@lists.dyne.org (which,
if obscured, is dng-owner AT lists DOT dyne DOT org in unredacted
form).  Please send your problem report to the list admins.

Bob




Re: [DNG] The Daedalus desktop needs some love

2022-01-19 Thread Bob Proulx via Dng
goli...@devuan.org wrote:
> Lars Noodén wrote:
> > What quality of display(s) and color calibration are required?
> 
> In all the years I have been doing this, that question has never entered my
> mind and I have no idea how to even begin answering it. I do "eye" art not
> "machine" art. I can perceive even one increment change in a hex.
> 
> Problem is . . . no one can know exactly what color another person is
> seeing. Add to that the vagaries of the monitor and . . .
> 
> I don't know if a screenshot would capture the hex or what's showing on your
> monitor but maybe you could give it a try for the chimaera desktop and let
> us have a look.

I just want to comment that I have two identical model displays side
by side in a dual monitor configuration on my desktop.  Both are
identical as far as vendor and model number are concerned.  Yet
side by side it is pretty obvious to me that they have a difference in
color tone between them.  They are definitely not the same even though
by specification they will be the same.

The first order difference in my two monitors I think is that the
backlight is not identical between them.  One shows a slightly warmer
color hue to the backlight than the other.  I think that swamps other
effects causing differences in my "matched pair".

None of this really has any effect on how nice a color theme looks on
the displays though.  That's an art project more than a science project.

Bob




Re: [DNG] nftables firewall and fail2ban replacement.

2022-01-15 Thread Bob Proulx via Dng
onefang wrote:
> I've been using shorewall and fail2ban for a while now, but nftables is
> soon replacing iptables, so it's time to consider some options.

Fortunately, through today's current Unstable, there is no problem
with the use of iptables.  But I have also been wondering what I am
going to do for a firewall when at some inevitable point I must switch
from using Shorewall to something different.

> My main problem with fail2ban is that it fails to ban.  Or rather it does
> ban, for that one rule I wrote myself, but not for any of the built in
> rules, but then it releases the ban, even though I have told shorewall to
> ban that particular IP.  So the IP ends up being unbanned, coz fail2ban
> says so.

By default fail2ban's packaging only enables one sshd rule.  All of
the others are disabled by default unless they are explicitly enabled.
Here is a look on a test system with everything just now freshly
installed.

root@turmoil:~# fail2ban-client status
Status
|- Number of jail:  1
`- Jail list:   sshd

My Internet connected systems have more enabled but I only have
enabled other local rules that I have written myself.
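
The shipped filters get enabled per jail in a local override,
something like this in /etc/fail2ban/jail.local.  A sketch; the jail
names beyond sshd are only examples of filters that ship with the
package:

    [sshd]
    enabled = true

    [postfix]
    enabled = true

    # then: fail2ban-client reload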

> Right now there's a particular IP hitting that one rule, and no matter
> what I do, even completely zapping fail2ban's database and leaving it
> turned off, that IP keeps bypassing my firewall somehow.

I think something different must be happening because this doesn't
match with the way shorewall and fail2ban work with each other.

For example with both installed and active there might be the
following set of iptables rules.  I set up a victim system so that I
could work through a test case.

iptables -nL | less
...

Chain INPUT (policy DROP)
target     prot opt source               destination
f2b-sshd   tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22
net-fw     all  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type BROADCAST
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type ANYCAST
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type MULTICAST
LOG        all  --  0.0.0.0/0            0.0.0.0/0            limit: up to 1/sec burst 10 mode srcip LOG flags 0 level 6 prefix "INPUT REJECT "
reject     all  --  0.0.0.0/0            0.0.0.0/0            [goto]

Chain f2b-sshd (1 references)
target     prot opt source               destination
REJECT     all  --  192.168.230.120      0.0.0.0/0            reject-with icmp-port-unreachable
RETURN     all  --  0.0.0.0/0            0.0.0.0/0

Chain net-fw (1 references)
target     prot opt source               destination
dynamic    all  --  0.0.0.0/0            0.0.0.0/0            ctstate INVALID,NEW,UNTRACKED
tcpflags   tcp  --  0.0.0.0/0            0.0.0.0/0
ACCEPT     all  --  0.0.0.0/0            0.0.0.0/0            ctstate RELATED,ESTABLISHED
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmptype 8
ACCEPT     icmp --  0.0.0.0/0            0.0.0.0/0            icmptype 11
ACCEPT     tcp  --  0.0.0.0/0            0.0.0.0/0            multiport dports 22,80,443
ACCEPT     udp  --  0.0.0.0/0            0.0.0.0/0            udp dpt:123
DROP       all  --  192.168.93.37        0.0.0.0/0
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type BROADCAST
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type ANYCAST
DROP       all  --  0.0.0.0/0            0.0.0.0/0            ADDRTYPE match dst-type MULTICAST
LOG        all  --  0.0.0.0/0            0.0.0.0/0            limit: up to 1/sec burst 10 mode srcip LOG flags 0 level 6 prefix "net-fw DROP "
DROP       all  --  0.0.0.0/0            0.0.0.0/0

In the above I intentionally triggered a fail2ban too many auth
failures for sshd so that there would be a f2b-sshd chain created.

Packets enter the INPUT chain.  For all port 22 ssh packets they
proceed down the "f2b-sshd" fail2ban sshd rule chain.  That chain has
the one IP that I explicitly triggered.  Packets in that chain
matching the banned IP address are then rejected.  Otherwise it falls
through to the return and continues processing down the INPUT chain.

The next chain is "net-fw" which came from my Shorewall rules file.
that chain shows a minimal set of rules to allow other ports.  I added
a drop rule for IP address 192.168.93.37.  It would get dropped there.

When the ban expires that IP address will be removed from the f2b-sshd
chain.  That chain being empty will return immediately to continue
processing the INPUT rule chain.  Which will then process through the
Shorewall defined net-fw chain hitting the 

Re: [DNG] merged /usr breakage

2022-01-06 Thread Bob Proulx via Dng
Didier Kryn wrote:
> Hendrik Boom a ecrit :
> > > > software that isn't properly packaged as a .deb, but instead has an
> > > > "installer" that needs to be run as root.

Immediately I think of all of those script "installers" that ask the
user to run something like this, as root, to install their software.

wget -O- http://example.com/foo.sh | bash

How many projects do this?  Hundreds?  Thousands?

In real life I have encountered many CAD/EDA tool vendors with
installation scripts that casually make system modifications not
knowing what they do.  I try to keep those contained.

In real life I have encountered sysadmins who have casually modified
modules, python in this case but it could have been other, in /usr/lib
outside of the package manager or any tracking.  Then later normal
machine upgrades broke things, because the newer hand-installed
modules were broken by upgrades of the older packaged ones.  If those
had been made into /usr/local instead
it would have been both visible and would not have been broken by
normal system upgrades.

Being more than twice burned I am extremely shy now...

> > > If the installer must be run as root, it is precisely because it needs
> > > to install software in /usr.

Or into /usr/local, which now requires root.  Back in the better days
of Debian it used to be possible for a user in group staff to install
into /usr/local without full superuser access.  But that's gone from
the installation now.

https://bugs.debian.org/484841#62

Since that has been removed in favor of using full root for everything
it removes a useful safety net layer.  For example this statement.

Russ Allbery writes in comment #77 in favor of using full root
instead of a more limited group staff.

I would prefer to drop the writeability of /usr/local by staff
personally.  I don't think it serves much useful purpose these days
given the existence of tools like sudo, and where it does, I think we
can work out a transition plan that will make it relatively easy for
sites to recreate the concept.

And the vote went against it.  So it's gone now.  It's root only.
Sigh.  On my systems I recreate the group staff concept and
implementation.  Because I do find it useful.
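
For anyone wanting the same thing, a minimal sketch of one way to
recreate it, using the stock staff group and dpkg-statoverride and run
as root; the user name is a placeholder:

    # make /usr/local group-writable by staff again, recorded by dpkg
    dpkg-statoverride --update --add root staff 2775 /usr/local
    # and put your own account into the group
    adduser youruser staff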

> > Such software should be installing to /opt, but might not.
>
> Installing application and configuration files in /opt is a possibility,
> but what to do with man page, application launcher, entry in the application
> menu? Installing in /opt does not require to mount /usr readonly. Just
> create a particular user account for /opt and use it to install. Even one
> user and one subdir per application.

Although I am not fully warmed up to /opt even after all of these
years, for each of those questions there is a strategy to handle it:
PATH, MANPATH, desktop launcher files, and so forth.  But those are
all already set up for /usr/local by default.  /opt, by the nature of
it being outside of the normal system, requires everything to be set
up for it.  Which is possible and easily done.  But it must also be
done if /opt is used.

> > > I have written such a software, called hopman. This discussion 
> > > suggests
> > > me that I should provide the option to install it in a user's directory,
> > > without the need to be root, rather than install it system-wide.

I would say yes.  If it is intended to be installed outside of being
packaged for the system then it should be easily installed both by
root (or the classic group staff) in /usr/local or by the user as
non-root installed into the user's $HOME.

Back in 2019 Didier Kryn wrote:
> cd hopman/hopman-1.0
> make && make install # You must be root to install 
> Installed files: /usr/bin/hopman, ...

I didn't follow things beyond this so do not know how things evolved,
and it isn't fair of me to reach back into the original if it has
improved and evolved since then.  Sorry.  My bad.  But in the above it
really should install that into /usr/local/bin (or sbin) by default
instead of /usr/bin.

For my own environment I would run that as myself in group staff which
can write to /usr/local/bin without root.  I would run it.  It would
fail.  I would notice that it was trying to install into /usr/bin.  I
would review and inspect.  I would then make adjustments so that on my
system it installed into /usr/local.  Having a read-only /usr in order
to detect these issues by preventing them is useful.  In my case
readonly is achieved by not being root.  But since we are training a
new generation that one must be root for everything then mounting
/usr read-only kicks the can down the road and pushes the problem
around to a different place.  But root can always remount it
read-write.  And the arms race continues.  Is Qubes the logical
conclusion?

https://www.qubes-os.org/

I also do not know the design of this particular tool hopman.  It may
require by the nature of it an installation into the root file system
at the system level.  For example if it needs to run as a root 

Re: [DNG] Pipewire and PulseAudio: apulse & firefox

2022-01-04 Thread Bob Proulx via Dng
Antoine via Dng wrote:
> Steve Litt wrote:
> > With my /bin/sh (which I believe is dash) I had to change:
> > 
> > function whatever {
> > 
> > to
> > 
> > whatever() {
> > 
> > on all three function declarations.
> 
> Yes, now that you mention it, that tends to vary between shells. I should
> have included them.

Portable shell use would be "whatever() {" which I wish to encourage
by saying that here.  Please use only that form for /bin/sh scripts. :-)

The "function" keyword was introduced by ksh.  This syntax is also
supported by bash and zsh but not by other shells.  When "function" is
used then the #! line must be /bin/ksh, /bin/bash, or /bin/zsh.

The ksh book refers to the other format as compatible "with recent
versions of the Bourne shell" (which I find funny now) because as we
know the original Bourne shell did not support functions at all.  The
book also reminds us that "{" and "}" are reserved words.  Meaning
that they must be separated from other tokens by whitespace in order
to be recognized.  They form a shell compound-list of commands.
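
A tiny illustration of both points, in plain /bin/sh syntax:

    #!/bin/sh
    # portable function definition -- works in dash, ksh, bash, and zsh
    whatever() {
        printf 'hello from whatever\n'
    }
    whatever

    # "{" and "}" are reserved words, so the spaces and the ";" matter
    { echo grouped; echo commands; }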

Bob




Re: [DNG] What not to back up

2021-11-28 Thread Bob Proulx via Dng
Mike Tubby wrote:
> ... but if you run a nameserver you may well need:
> 
> /var/cache/bind
> 
> as that's where your zonefiles are ;-)

Sorry.  No.  I am curious what led you to that conclusion?

By default in the Debian packaged configuration only the cached zone
files downloaded on secondaries are located there.  (The upstream BIND
does not specify a default location.  This is a distro package default
location.)  Since it is a package default the local admin may also
change it to any other location they wish.  But /var/cache is as good
of a location as any for cached files.

And therefore by all means delete that directory any time you feel
like doing so and recreate it empty.  It's not important and does not
require being backed up.  It only needs to exist so the named has a
location to store and cache files (that have a TTL too) downloaded
from the primary.

If that directory is empty then upon start the BIND named will request
a fresh download of all of the zones it is configured for as a
secondary nameserver and will cache them in that directory again.  If
the named is not configured as a secondary then that directory will be
empty of zone files.

For DNS primaries one specifies the source zone file using the
named.conf "file" directive.  Put that file anywhere you wish to put
it.  But putting that in /var/cache/bind would be a very poor choice
in my opinion.

Example of actually doing this.

file "/etc/bind/db.proulx.com";

Since this is an option that must be configured when setting up a
primary zone then you can put those source zone files anywhere you
decide is the place to keep the source of them.  I highly recommend
etckeeper for all of /etc and therefore I prefer to keep source there
where etckeeper can track them.

Bob




Re: [DNG] system administration of non-systemd distros and releases

2021-11-20 Thread Bob Proulx via Dng
Peter Duffy wrote:
> I've recently been asked to recommend an upgrade route for a number of
> linux servers, and I proposed going to devuan. In response, I've had a
> concern raised which took me by surprise. It was suggested that in the
> future, it may not be possible to find staff who have the skills to
> administer and manage servers running non-systemd or pre-systemd
> distros/releases.

That's actually a concern that I think is valid.  Sad.  But valid.
However it depends upon the scale of the timeline.

You are working with them right now.  Are you expecting to do so over
the next 3 months?  Six months?  Year?  Two years?  And so on...  In
the next release cycle they have expertise.  And that expertise says
to go with Devuan.

In 10 years who is to say?  In 10 years if systemd continues to be
required by the most popular distros then, just like MS training a
generation with Windows, it might be difficult to find expertise
outside of that mainstream.  But that is 10 years away.  The future is
very hard to predict correctly.

> I've tried to give reassurance - but I'm still wondering if this could
> be a valid concern. I'd always taken the view that it's primarily the
> linux sysadmin community which is trying to stop the onslaught of the
> systemd juggernaut - but obviously, the greater the proportion of
> servers running systemd-based distros/releases, the less staff get
> exposed to non-systemd management techniques and tools.

I don't know what type of activity or environment this is in but I can
guess that there is software and that it is running.  Is it a web
server?  Is it a C mill?  Is it a point-of-sale system?  Whatever.
That they are now talking about upgrading makes me think they get
entrenched and then stick there for a long time.  They are not keeping
updated.  As long as the system is running the software they need they
would prefer to freeze on that and keep it stable for a long time.
But now they are doing a lump-effort upgrade.

I would reassure them this way.  I would assure them that the software
they are running now is portable on either type of system.  (Is it?  I
assume so.)  In which case it runs now on a system without systemd.
And in the future if one were to replace the hosting OS with one
running systemd then the software would run there too.  Right now the
best wisdom is that the hosting OS is most reliable with Devuan.  And
with the hosting OS being reliable then their software will be most
reliable running upon it.

A decade later if a new generation arrives and if they only know
systemd systems then in that future time they could change the hosting
OS to one they think is the better choice at that time.  It's all
about providing the most reliable hosting system for the task that
needs to be done.  In the past it was different than it is now.  We
now have our current systems.  In the future it may be different.  But
in the here and now if you think the best option is Devuan then that
is the best option now.  And doing that now does not preclude making a
different decision later.

Ten years ago we would have all chosen Debian as the right answer for
the hosting OS at that time.  We would have been right.  But here we
are a decade later.  The future is hard to accurately see.  It's
always in motion.

> I'd be grateful for thoughts and comments.

It is the tyranny of the default.  People will install the most
popular whatever.  And it will have defaults.  And whatever those
defaults are those will be the most common use configuration.  Very
few will actually change the defaults.

Most of us here though are more likely to be the people who do not see
the defaults as hard limitations.  We will open things up.  We void
warranties.  We make modifications.  We would rather be pilots than
passengers.

Bob




Re: [DNG] mutt and html

2021-11-03 Thread Bob Proulx via Dng
Hendrik Boom wrote:
> ~/.muttrc:
> 
> set mailcap_path="~/.mailcap"

Strange.  I do not need to set this for it to work.  Does that
setting override the default use of /etc/mailcap?

> auto_view text/html
> ...
> Unfortunately it now prefers the .html version to the plaintext version.
> I presume I need an alternative_order line.

I have this comment in my own file.

# The problem with autoview is that it then prefers the text/html
# part instead of the text/plain part.  Instead use 'm' to view the
# part with the associated mailcap.
# auto_view text/html

I don't know if there is a better way to configure it.  But because I
have not found a way to avoid this undesired behavior, I avoid that
configuration and instead use an explicit 'm' to view the html part
with the associated mailcap entry.

> ~/.mailcap:
> 
> #text/html ; chromium %s
> text/html ; firefox-esr -no-remote %s ; copiousoutput

For firefox-esr the "copiousoutput" does not really follow, as that
tells mutt to page through the output using its builtin text pager.
Which doesn't really apply since firefox-esr opens a graphical window
on $DISPLAY.

Also this is missing a test of $DISPLAY being set.  Which means that
it will activate but fail if one is using ssh (without -X) or
otherwise when DISPLAY is not set.  And also I admit to cargo culting
here but all of the stock entries use nametemplate.

I suggest starting with the stock entries for firefox-esr:

$ grep firefox-esr /etc/mailcap
text/html; /usr/bin/firefox-esr %s; description=HTML Text; test=test -n "$DISPLAY";  nametemplate=%s.html
text/xml; /usr/bin/firefox-esr %s; description=XML Text; test=test -n "$DISPLAY";  nametemplate=%s.xml
image/png;  /usr/bin/firefox-esr %s; description=PNG Image; test=test -n "$DISPLAY";  nametemplate=%s.png
image/jpeg;  /usr/bin/firefox-esr %s; description=JPEG Image; test=test -n "$DISPLAY";  nametemplate=%s.jpeg
image/gif;  /usr/bin/firefox-esr %s; description=GIF Image; test=test -n "$DISPLAY";  nametemplate=%s.gif

And then modifying those to add the -no-remote option or other option as 
desired.

Bob




Re: [DNG] mutt and html

2021-11-02 Thread Bob Proulx via Dng
Hendrik Boom wrote:
> Until upgraded from ascii to beowulf, HTML messages were tolerable.
> Either they were obvious crap, in which case I just deleted them,
> or they seemed like they were worth viewing, in which case they were
> usually html attachments, and I could see the list of attachments by
> typing v and selecting the html version -- in which case it would
> open for me in the chromium browser.
> 
> This stopped working after the upgrade to beowulf.

Until I read that you are connecting with ssh -X I thought that this
is almost certainly due to changes in /etc/mailcap handlers.  You
might review them on your system anyway.  It is possible they have
changed for you with the upgrade and that is why your html handling
has changed.

[[ Time to advocate for "etckeeper".  If that is installed then one can
look at the changes over time to any particular file in /etc.  This is
the command line way to view the "patches" made to a file.

cd /etc
git log -p mailcap | less

Meanwhile...  I normally just emacs the file, browse the version
history inside the Emacs git blame interface, C-x v g, and navigate
through previous and next versions.  I am sure there are many other
git version history browsers available. ]]

Here is a sampling of html handlers from my system, which will not be
typical since every system will have different ordering based upon the
ordering in which packages were installed and removed.  Packages add
and remove handlers as they are modified.  But useful for the idea.

$ grep html /etc/mailcap
text/html; /usr/bin/sensible-browser %s; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/firefox %s; description=HTML Text; test=test -n "$DISPLAY";  nametemplate=%s.html
text/html; /usr/bin/firefox-esr %s; description=HTML Text; test=test -n "$DISPLAY";  nametemplate=%s.html
text/html; /usr/local/bin/chromium %s; test=test -n "$DISPLAY"
application/xhtml_xml; /usr/local/bin/chromium %s; test=test -n "$DISPLAY"
text/html; /usr/bin/chromium %s; test=test -n "$DISPLAY"
application/xhtml_xml; /usr/bin/chromium %s; test=test -n "$DISPLAY"
text/html; dwb %s; test=test -n "$DISPLAY"
application/xhtml+xml; dwb %s; test=test -n "$DISPLAY"
text/html; midori --private %s; test=test -n "$DISPLAY"
application/xhtml+xml; midori --private %s; test=test -n "$DISPLAY"
text/html; /usr/bin/elinks -force-html %s; needsterminal; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/links %s; needsterminal; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/w3m -T text/html %s; needsterminal; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/elinks -force-html -dump %s; copiousoutput; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/links -dump %s; copiousoutput; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/lynx -force_html %s; needsterminal; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/w3m -I %{charset} -dump -T text/html %s; copiousoutput; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/html2text %s; copiousoutput; description=HTML Text
text/html; /usr/bin/lynx -dump -force_html %s; copiousoutput; description=HTML Text; nametemplate=%s.html

I keep a personalized ~/.mailcap file with most of the selections I
care about.  When I want to do something different and force my own
choice, I simply uncomment the one I want to use at the moment.

# text/html; /usr/bin/firefox %s; description=HTML Text; test=test -n "$DISPLAY";  nametemplate=%s.html
# text/html; /usr/bin/elinks -dump -dump-color-mode 3 %s; copiousoutput; description=HTML Text; nametemplate=%s.html
# text/html; /usr/bin/elinks -dump %s; copiousoutput; description=HTML Text; nametemplate=%s.html
text/html; /usr/bin/links -dump %s; copiousoutput; description=HTML Text; nametemplate=%s.html
# text/html; /usr/local/bin/iceweasel '%s'; description=HTML Text; test=test -n "$DISPLAY";  nametemplate=%s.html
# text/html; /usr/bin/lynx -force_html %s; needsterminal; description=HTML Text; nametemplate=%s.html
# text/html; /usr/bin/lynx -dump -force_html %s; copiousoutput; description=HTML Text; nametemplate=%s.html
# text/html; /usr/bin/w3m -no-mouse -T text/html %s; needsterminal; description=HTML Text; nametemplate=%s.html

Currently I am forcing "links -dump" rendering as that is doing pretty
well for the types of html mail I am receiving at the moment.  For
most html mail I am receiving the text rendering of the message is
more than sufficient.  There are hard cases of fluff and glitter
though.  If I have a problem I will simply edit that ~/.mailcap file
and change the selection and try one of the other viewers.  Some work
better than others.

If I want to open the attachment in a real graphical browser then I
perform a manual action: I 'v'iew the MIME parts and open the html
attachment from there.

Re: [DNG] Review of documentation needed

2021-09-22 Thread Bob Proulx via Dng
g4sra via Dng wrote:
> I opt to live dangerously..(shove an '-r' in there too if you feel like it)
> $ rm -i .* *
> this way you can delete illegitimate entries without hacking the filesystem.

That is definitely dangerous indeed.  For example, if for whatever
reason one happens to have "-f" as a file name, then it will override
the "-i" given as a command line option and all files will be deleted
without question.

$ mkdir /tmp/testdir
$ cd /tmp/testdir
$ touch ./-f .foo foo
$ echo rm -i .* *
rm -i . .. .foo -f foo
$ rm -i .* *
rm: cannot remove '.': Is a directory
rm: cannot remove '..': Is a directory
$ ll -a
-rw-rw-r--  1 rwp  rwp0 Sep 22 14:57 -f
drwxrwxr-x  2 rwp  rwp   60 Sep 22 14:57 ./
drwxrwxrwt 11 root root 760 Sep 22 14:57 ../

Where is .foo?  Where is foo?  There was no -i prompt.  The -f file is
still on disk and not removed?  All of this is correct behavior for the
command as given: the glob expanded to include the "-f" file name, rm
parsed it as the force option which overrides -i, and so .foo and foo
were silently removed while "-f" itself was consumed as an option
rather than treated as a file argument.

When dealing with file globs like "*" it is always better to prefix
them with "./", as in "./*", so that the first character of an expanded
name can never be a dash that gets interpreted as an option.  You make
your own luck! :-)
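
A minimal sketch of a safer interactive cleanup of the same test
directory, shown only as an illustration (assuming GNU rm and a POSIX
shell; a glob that matches nothing is simply reported as an error):

$ cd /tmp/testdir
$ rm -i ./* ./.[!.]*      # "./" prefix: "-f" expands to "./-f", never an option

The other common idiom is to end option parsing explicitly with "--",
as in "rm -i -- .* *", which solves the "-f" problem the same way,
although ".*" still matches "." and ".." and rm will refuse those.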

[[ I have seen people intentionally leave a file named "-i" in their
home directory so that an accidental "rm *" picks it up as an option
and prompts before removing anything.  I recommend not relying upon
that trick though. ]]

Bob




Re: [DNG] /run/user/#

2021-08-31 Thread Bob Proulx via Dng
Luciano Mannucci wrote:
> I have two Beowulf systems that should be identical. If I issue "df"
> one does report among mounted filesystems a /run/user/xxx, where xxx
> is the id number of the user I am connected with (via ssh) and the
> other doesn't. As uname -a they both report:

I also don't desire that configuration and so I remove it from my
system.  But I have automated this for so long that I have forgotten
the root of it.  I believe it is due to the action of PAM (Pluggable
Authentication Modules), which connects login sessions to
libpam-systemd.  Or at least it did at one time.  IIRC.

Compare the contents of /etc/pam.d on your two systems and see what is
different between them.  The one with /run/user will have PAM modules
that the one without does not have.  Pretty sure.  I think.

If it were me I would copy the /etc/pam.d of one system to the other
and then diff the two.

rsync -av othersystem:/etc/pam.d /tmp/junk/
diff -ru /tmp/junk/pam.d/ /etc/pam.d/

My old private notes from Stretch or thereabouts say the following.
They are old notes and package names have mutated since then.

# File /etc/pam.d/common-session is modified by libpam-systemd
# postinst script to add the following line.
#   session  optionalpam_systemd.so
# See man pam_systemd(8) for more details.
# Note: pam_systemd.so creates /run/user/$(id -u) files.

Also, my problem at the time was that X would not start with this PAM
configuration in place, because it wanted all of the rest of systemd
installed and of course I did not have it installed.  Therefore I
needed to remove it.  In Devuan, elogind has since replaced logind to
address this.

I expect that the system with /run/user has a configuration like this
and the system without /run/user does not.

Caution: I am not really up to date on the current state of things
here.  I am just pretty sure that the /run/user directories are
configured through PAM and set up by elogind/logind via the
/etc/pam.d/* interface somewhere.  I remove those entry lines from the
pam.d files and that avoids the configuration.
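
A quick way to find those lines, shown only as a sketch (the module
name may be pam_systemd.so or pam_elogind.so depending on what is
installed):

grep -rnE 'pam_systemd|pam_elogind' /etc/pam.d/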

Also, starting X can be related, and I have xserver-xorg-legacy
installed.  I would cross-check whether it is installed, and on which
of your two systems.

Bob




Re: [DNG] manpages

2021-08-30 Thread Bob Proulx via Dng
Hendrik Boom wrote:
> . wrote:
> > > Looking for the gcc..g++ suite manpages, expected to find them in 
> > > 'gcc-doc' but it's missing.
> > > Anybody point me in the right direction ?
> > 
> > I found the answer in
> > https://unix.stackexchange.com/questions/523079/what-packages-the-man-pages-for-gcc-on-debian-10-buster-testing
> > 
> > Basically, add the "contrib" and "non-free" repositories to your deb
> > sources, then install "gcc-doc".
> 
> So are man pages not considered free nowadays?

cat /usr/share/doc/gcc-doc/README
...
Documentation itself is distributed under the terms of the
GNU Free Documentation License (with invariant sections or
front/back cover texts), which is not DFSG-free,
so documentation could not be distributed in Debian main.

However, for people who need the documentation, it is available
from non-free section of the Debian archive.

Documentation is free when it comes licensed under a free license.
The GNU Free Documentation License *may* be a free license but only
when there are no invariant sections.  Adding those turns it into a
non-free license.  Multiple GNU Project packages have this problem.  :-(

Tragicomically, for many people this non-free documentation from the
GNU Project is the only reason they need to add the non-free suite to
their sources.list at all!
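
For reference, a sketch of what enabling non-free could look like on
Beowulf; the mirror URL here is an assumption, so keep whatever mirror
your sources.list already points at and just append the extra
components:

deb http://deb.devuan.org/merged beowulf main contrib non-free

apt-get update
apt-get install gcc-doc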

Bob




Re: [DNG] Download a package, maybe a few, for a different architecture and a different release.

2021-08-19 Thread Bob Proulx via Dng
Hendrik Boom wrote:
> I need to download a package to install on a machine that is currently 
> air-gapped.

I know you have already solved your problem.  But for the future you
might consider the apt-offline package.

$ apt-cache show apt-offline

 
Package: apt-offline
Depends: python3:any, apt, less, python3-magic
Description-en: offline APT package manager
 apt-offline is an Offline APT Package Manager.
 .
 apt-offline can fully update and upgrade an APT based distribution without
 connecting to the network, all of it transparent to APT.
 .
 apt-offline can be used to generate a signature on a machine (with no network).
 This signature contains all download information required for the APT database
 system. This signature file can be used on another machine connected to the
 internet (which need not be a Debian box and can even be running windows) to
 download the updates.
 The downloaded data will contain all updates in a format understood by APT and
 this data can be used by apt-offline to update the non-networked machine.
 .
 apt-offline can also fetch bug reports and make them available offline.
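
Roughly, the workflow is three commands; the file names here are made
up for illustration, and the exact options are worth checking against
apt-offline(8) on your version:

# on the air-gapped machine: record what is wanted
apt-offline set /tmp/offline.sig --install-packages somepackage

# on any networked machine: download what the signature asks for
apt-offline get /tmp/offline.sig --bundle /tmp/offline-bundle.zip

# back on the air-gapped machine: hand the bundle to APT
apt-offline install /tmp/offline-bundle.zip
apt-get install somepackage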

Bob




Re: [DNG] ntp setup

2021-06-19 Thread Bob Proulx via Dng
o1bigtenor via Dng wrote:
> Please  - - - anyone for a short writeup on how to
> install/start/whatever else to ntp without using systemd?

A full HOWTO page on how to fix this easily for your system is this
one line.

apt-get install ntp

Then the best thing to do is to reboot so that everything gets started
off fresh with the correct time.

That's it.  That is the entirety of what is needed in a HOWTO
document.  Nothing more is needed.  The default configuration will
start automatically in Beowulf.  The default configuration will use
the standard pool of Internet time servers.  That's it.  Done.
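
If you want to check it afterwards, one quick look (assuming the
classic ntpd shipped in the ntp package) is:

ntpq -p

which lists the pool servers being used along with their reachability
and offsets.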

There are some special case situations though: a Raspberry Pi or other
machine without a battery backed real time clock, or a machine forcing
DNSSEC validation at boot time without a hardware clock.  But people
running those systems should already know they are on that type of
system and handle those issues accordingly.

Bob

