Re: systemd-timesyncd

2024-01-08 Thread Mark Fletcher
> Is it supposed to be installed by the net-installer? There does not seem

> to be any man pages other than the bog std stuff. When I found the
> /etc/systemd/timesyncd I immediately asked the system for man timesyncd,
> got this:
> gene@coyote:/etc$ man timesyncd
> No manual entry for timesyncd
>

Try man systemd-timesyncd . The systemd man pages are usually
prefixed with systemd-. I’m not at a machine right now to check, but usually
that’s the case.
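If the page still doesn't show up, a couple of quick checks (illustrative; man -w just prints the path of a page if it is installed):

```shell
# The daemon's page is systemd-timesyncd(8); its config reference is timesyncd.conf(5)
man -w systemd-timesyncd 2>/dev/null || echo "systemd-timesyncd man page not installed"
man -w timesyncd.conf 2>/dev/null || echo "timesyncd.conf man page not installed"
```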

Mark


Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-22 Thread Mark Fletcher
On Fri, 22 Dec 2023 at 10:54, Mark Fletcher  wrote:
>
> On Fri, 22 Dec 2023 at 07:41, Anssi Saari
>  wrote:
> >
> > Mark Fletcher  writes
> >
> > > The question is, what values are config_directory and prefix set to?
> >
> > Grub sets config_directory to point to the directory where it's reading
> > its config from. In other words, /boot.
> >
> > But why not just use 40_custom? It copies whatever is in the file (after
> > the header) to your grub.cfg. Don't need to figure out what file goes in
> > what directory. It also keeps configuration in /etc instead of moving it
> > to /boot.
> >

That likely would have worked too, but I have just discovered that I
had a typo in my custom.cfg file, and when I fixed it, it worked.

Thanks all for your help with this.

Mark



Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-22 Thread Mark Fletcher
On Fri, 22 Dec 2023 at 01:21, David Wright  wrote:
>
> What sort of mess? I would have thought Grub would ignore excess
> kernels dropped into /boot. I have a laptop here that has two
> bookworm netinst ISOs (release candidates) and a kernel and initrd
> (hd-media) for booting the ISOs, and they've been ignored through
> at least two kernel upgrades:
>
> 4096 Oct  9 22:49 /grub
> 83 Aug 16 15:52  System.map-5.10.0-25-686
> 83 Sep 28 23:25  System.map-5.10.0-26-686
> 245147 Aug 16 15:52  config-5.10.0-25-686
> 245200 Sep 28 23:25  config-5.10.0-26-686
> 703594k Apr 24  2023  debian-bookworm-DI-rc1-i386-netinst.iso
> 704643k Apr 28  2023  debian-bookworm-DI-rc2-i386-netinst.iso
> 19920k Apr 27  2023  initrd.gz
> 33580k Oct  7 12:36  initrd.img-5.10.0-25-686
> 33588k Nov 27 18:46  initrd.img-5.10.0-26-686
> 5548224 Apr  8  2023  vmlinuz
> 4988160 Aug 16 15:52  vmlinuz-5.10.0-25-686
> 4990880 Sep 28 23:25  vmlinuz-5.10.0-26-686

It saw a Debian kernel (6.1.something) on / and
an LFS kernel (6.4.12 IIRC) on /. It
also saw candidate root filesystems on  and . And it proceeded to Cartesian-join them, so I got LFS
kernel with Debian root filesystem, Debian kernel with Debian root
filesystem, LFS kernel with LFS root filesystem (specified by
/dev/sdc2, which the very next time it booted that disk was /dev/sda2)
and Debian kernel with LFS root filesystem (again specified by device
name). Obviously, only 2 of those are things I'd ever want to boot.
And, once I saw what it had done, I kinda slapped my forehead and
thought well yeah, how was it supposed to know not to do that...

>
> I've never run LFS; what does the menuentry in grub.cfg look like?
>

menuentry 'Linux From Scratch (12.0-systemd) (on /dev/sdc2)' --class linuxfromscratch --class gnu-linux --class gnu {
insmod part_gpt
insmod ext2
set root='hd2,gpt1'
if [ x$feature_platform_search_hint = xy ]; then
  search --no-floppy --fs-uuid --set=root --hint-bios=hd2,gpt1 --hint-efi=hd2,gpt1 --hint-baremetal=ahci2,gpt1
else
  search --no-floppy --fs-uuid --set=root 957b66
fi
linux /vmlinuz-6.4.12-lfs-12.0-systemd root=PARTUUID=
}

That is AFTER my edits to replace root=/dev/sdc2 in the linux command
line with the PARTUUID. The FS UUIDs I elided above were put there by
GRUB. Again, Debian GRUB created this when I ran it with os_prober
turned ON. I grabbed this and copied it to custom.cfg, made the edit
to add the PARTUUID, then ran update-grub again with os_prober turned
off.

Ah hold on. Maybe os_prober is what is generating this menuentry
stanza in the first place, and grub is just using it. If that's the
case, I was asking the wrong question in the first place. Maybe the
question isn't why isn't grub using PARTUUID= in this situation, which
the manual says it will, but rather why isn't os_prober doing so?

>
> So what does this command show, if anything:
>
>   $ zgrep result: /var/log/messages*
>   /var/log/messages:Dec 21 18:10:52 acer 90linux-distro: result: 
> /dev/sda4:Debian GNU/Linux 12 (bookworm):Debian:linux
>   /var/log/messages:Dec 21 18:10:57 acer 40grub2: result: 
> /dev/sda4:/dev/sda4:Debian 
> GNU/Linux:/boot/vmlinuz-6.1.0-13-686:/boot/initrd.img-6.1.0-13-686:root=UUID=ac1b3d4f-aa95-4e12-b6e6-fd455273a3b8
>  ro quiet
>   /var/log/messages:Dec 21 18:10:57 acer 40grub2: result: 
> /dev/sda4:/dev/sda4:Debian GNU/Linux, with Linux 
> 6.1.0-13-686:/boot/vmlinuz-6.1.0-13-686:/boot/initrd.img-6.1.0-13-686:root=UUID=ac1b3d4f-aa95-4e12-b6e6-fd455273a3b8
>  ro quiet
>   /var/log/messages:Dec 21 18:10:57 acer 40grub2: result: 
> /dev/sda4:/dev/sda4:Debian GNU/Linux, with Linux 6.1.0-13-686 (recovery 
> mode):/boot/vmlinuz-6.1.0-13-686:/boot/initrd.img-6.1.0-13-686:root=UUID=ac1b3d4f-aa95-4e12-b6e6-fd455273a3b8
>  ro single
>   /var/log/messages:Dec 21 18:10:57 acer 40grub2: result: 
> /dev/sda4:/dev/sda4:Debian GNU/Linux, with Linux 
> 6.1.0-10-686:/boot/vmlinuz-6.1.0-10-686:/boot/initrd.img-6.1.0-10-686:root=UUID=ac1b3d4f-aa95-4e12-b6e6-fd455273a3b8
>  ro quiet
>   /var/log/messages:Dec 21 18:10:58 acer 40grub2: result: 
> /dev/sda4:/dev/sda4:Debian GNU/Linux, with Linux 6.1.0-10-686 (recovery 
> mode):/boot/vmlinuz-6.1.0-10-686:/boot/initrd.img-6.1.0-10-686:root=UUID=ac1b3d4f-aa95-4e12-b6e6-fd455273a3b8
>  ro single
>
> (you can use messages*, syslog* or user.log*)

Interestingly, the most recent output in /var/log/messages* was from
November, i.e. before I started playing with this. So here is what is in
/var/log/syslog*, confining attention to the most recent date:

/var/log/user.log:2023-12-18T18:37:47.192311+00:00 phantom 40lsb:
result: /dev/sdc2:Linux From Scratch
(12.0-systemd):LinuxFromScratch:linux
/var/log/user.log:2023-12-18T18:37:49.378939+00:00 phantom
90linux-distro: result: /dev/mapper/kazuki--vg-root:Debian GNU/Linux
10 (buster):Debian:linux
/var/log/user.log:2023-12-18T18:37:49.893237+00:00 phantom 90fallback:
result: 

Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-22 Thread Mark Fletcher
On Fri, 22 Dec 2023 at 07:41, Anssi Saari
 wrote:
>
> Mark Fletcher  writes
>
> > The question is, what values are config_directory and prefix set to?
>
> Grub sets config_directory to point to the directory where it's reading
> its config from. In other words, /boot.
>
> But why not just use 40_custom? It copies whatever is in the file (after
> the header) to your grub.cfg. Don't need to figure out what file goes in
> what directory. It also keeps configuration in /etc instead of moving it
> to /boot.
>

The only answer to that is, because Felix pointed me in the direction
of 41_custom earlier in this thread. I will have a closer look at
40_custom.

Mark



Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-21 Thread Mark Fletcher
On Mon, 18 Dec 2023 at 22:15, Felix Miata  wrote:
>
> I can't answer why Grub scripts do what they do, because I don't really use 
> them,
> and don't need to understand much about them. Grub config files in 
> /boot/grub/ are
> akin to scripts, but they are really simple, mainly just command scripts. The
> usual one is grub.cfg, the one os-prober feeds from other Linux 
> installations. A
> less common one is custom.cfg. To use it requires the admin build it. When it
> exists, grub-mkconfig incorporates its use by/in grub.cfg. It actually gets 
> called
> by default from /etc/grub.d/41_custom, which adds the stanzas from it to the 
> Grub
> boot menu - after those that it has generated itself. I copy it to
> /etc/grub.d/07_custom, and empty 41_custom. That causes my custom stanzas to
> appear first in Grub's boot menu. /etc/grub.d/40_custom acts, and a copy of 
> it as
> 06_custom would act, in similar fashion, except that the admin's custom 
> stanzas
> are put into it by the admin instead of into a custom.cfg file.
>
> Thus, you, as admin, construct working stanzas however you like, with or 
> without
> UUIDS, with or without device names, with or without volume LABELS, however 
> you
> like boot to go, and they don't get changed, except by the admin - you. This 
> is
> easy, because you as admin can use the kernel (and initrd) symlinks Debian 
> puts in
> /, or anywhere you'd like symlinks to them to go, for distros that don't
> automatically create them for you. There's no need for maintenance when new
> kernels are installed in the case of Debian and other distros that 
> automatically
> generate new symlinks. For those that don't, creating them is trivial.
>

I have just tried this, I see 41_custom in /etc/grub.d and I see that
the text from that file ends up in my grub.cfg when I run update-grub.
So I have disabled os-prober, since I won't need it if I can get this
working, and created a custom.cfg file with approximately what
os-prober generated as menuentry stanza lines for my LFS instance
(with references to os-prober removed and the root=/dev/sdc2 changed
to root=PARTUUID=)
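In outline, a custom.cfg stanza of the kind described here would look like this (a sketch only: the <...> values are placeholders, not the real identifiers from this system):

```
# /boot/grub/custom.cfg -- sketch; <...> values are placeholders
menuentry 'Linux From Scratch (12.0-systemd)' --class gnu-linux --class gnu {
        insmod part_gpt
        insmod ext2
        search --no-floppy --fs-uuid --set=root <fs-uuid-of-lfs-boot-partition>
        linux /vmlinuz-6.4.12-lfs-12.0-systemd root=PARTUUID=<partuuid-of-lfs-root>
}
```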

And... on reboot, the menu entry for LFS is not included.

Now looking closer at 41_custom, it says this:

#!/bin/sh
cat <

Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-21 Thread Mark Fletcher
On Thu, 21 Dec 2023 at 21:38, Mark Fletcher  wrote:
>
>
> So I rebuilt my LFS (was happy to do so, this is a learning exercise)
> with its own /boot partition, which gets me closer to the solution I
> want which is one Grub, Debian's grub, with Debian as the first and
> default boot choice, but LFS available as an alternative. And the only
> remaining problem is the Debian GRUB's insistence on using /dev/sdX2
> (for the root partition is the second partition on the disk) in the
> "linux" command line parameter.
>
Apologies -- I probably made it less clear rather than more with the
above -- I mean that Debian GRUB is insisting on using /dev/sdX2 _for
the LFS menu entry_. For its own menu entry it works fine, because my
bookworm installation is using LVM.

There is and ever has been only one GRUB on this system -- Debian's.
That's why I am asking about this on a Debian list. My goal here is to
configure Debian's GRUB to boot LFS as a secondary option alongside
the primary option of Debian, and have that survive kernel updates for
Debian, and I am there, except for persuading it not to specify
"root=/dev/sdX2" for the root filesystem in the LFS linux command
line, and instead persuading it to specify "root=PARTUUID="
which grub-mkconfig's documentation says is what it will do when there
is no initrd and the GRUB_DISABLE_LINUX_{PART,}UUID variables are set
to false.

Mark



Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-21 Thread Mark Fletcher
On Wed, 20 Dec 2023 at 06:01, David Wright  wrote:
>
>
> I can't see anywhere where the OP claims to have set up LFS for
> booting itself, as opposed to being booted from a Debian Grub.
> It only says "I have been able to get a grub.cfg including the
> LFS system …", which seems to imply LFS has only been set up
> as a "foreign" system by a Debian system.

Yes, that's exactly it. My very first attempt involved using Debian's
/boot partition as the /boot partition for LFS as well, so installing
LFS's kernel (6.4.12 IIRC) alongside Debian's, but I quickly learned
the folly of that when I saw the mess update-grub made of that...

So I rebuilt my LFS (was happy to do so, this is a learning exercise)
with its own /boot partition, which gets me closer to the solution I
want which is one Grub, Debian's grub, with Debian as the first and
default boot choice, but LFS available as an alternative. And the only
remaining problem is the Debian GRUB's insistence on using /dev/sdX2
(for the root partition is the second partition on the disk) in the
"linux" command line parameter.

>
> When os-prober runs on my system, a lot of stuff gets logged in
> messages, syslog and user.log. The lines that contain the string
> "result:" (without the quotes) are interesting. It's evident from
> those that have six fields following result: have had their root=
> field copied from the foreign system's grub.cfg. (In my case,
> "foreign" means a Debian system of the previous release.)
>
> When os-prober writes several clauses into my new grub.cfg's
> "### BEGIN /etc/grub.d/30_os-prober ###" section, the references
> to the partition are constructed using UUIDs (not PARTUUIDs, because
> there's an initrd). However, the kernel command line reads
> "root=LABEL=toto04", so that string wasn't constructed by os-prober,
> but copied from the foreign grub.cfg¹.
>
> That suggests to me the probability that whereas +Grub constructs+
> the root= strings for the "### BEGIN /etc/grub.d/10_linux ###"
> section, +os-prober copies+ the root= strings into the
> "### BEGIN /etc/grub.d/30_os-prober ###" section instead.
>

Interesting -- but there is no grub.cfg on the LFS system because grub
has never been installed there. There is a /boot partition but no
/boot/grub/grub.cfg .
So, nothing to copy from in this case.

Mark



Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-21 Thread Mark Fletcher
On Wed, 20 Dec 2023 at 02:40, Felix Miata  wrote:
>
> Mark Fletcher composed on 2023-12-20 00:28 (UTC):
>
> > I am curious to know from Debian
> > GRUBbers (as it were) if the behaviour I am describing in this thread
> > is expected...
>
> I suspect few if any regulars here spend much time with Slackware.

I am genuinely confused about how Slackware came into the picture
here... my foreign OS is LFS, nothing to do with Slackware as far as I
know...

I appreciate your initial help which I still think is my best hope of
a solution -- about to implement in a few minutes so will know for
sure shortly -- but have not responded to the rest of this message as
I think it is based on a misunderstanding.

Thanks

Mark



Re: GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-19 Thread Mark Fletcher
On Mon, 18 Dec 2023 at 22:15, Felix Miata  wrote:
>
> Thus, you, as admin, construct working stanzas however you like, with or 
> without
> UUIDS, with or without device names, with or without volume LABELS, however 
> you
> like boot to go, and they don't get changed, except by the admin - you. This 
> is
> easy, because you as admin can use the kernel (and initrd) symlinks Debian 
> puts in
> /, or anywhere you'd like symlinks to them to go, for distros that don't
> automatically create them for you. There's no need for maintenance when new
> kernels are installed in the case of Debian and other distros that 
> automatically
> generate new symlinks. For those that don't, creating them is trivial.
>
> 
> is a thread that goes through my UEFI system setups.
> --

Thanks very much for your help, I reckon I can make that work.

Is that the "official" answer? I am curious to know from Debian
GRUBbers (as it were) if the behaviour I am describing in this thread
is expected...

Thanks

Mark



GRUB -- Debian overrides? Or maybe I just don't understand it well...

2023-12-18 Thread Mark Fletcher
Hello

I need help with a problem configuring grub. My main OS on the system
concerned is bookworm (was probably originally installed as bullseye,
might even have been earlier, and then has been upgraded over the
years, now at bookworm). That system is the system that has installed
grub and grub's configuration gets updated whenever a kernel update
happens in the Debian stable ecosystem.

I have now set up an LFS (Linux from Scratch) installation on the same
box, on a brand new SSD installed into the machine for the purpose. So
I want to be able to dual-boot. I have been able to get a grub.cfg
including the LFS system but the root of the LFS system is being
specified by device name and that is a problem.

There are also a couple of existing, older SSDs in the machine, one of
which contains an older installation of Debian which I don't use any
more but haven't quite brought myself to delete (although will
eventually), and one is mounted as /opt in my main system.

Because I want to dual-boot with the LFS system I turned on the
os-prober in /etc/default/grub. Running update-grub then duly finds
the LFS system and sets up configuration in grub for it (it also finds
the old Debian system, which I'd prefer to keep out of grub but that's
not a big deal for now).

Both old and current Debian installations use LVM and initrd. The LFS
instance uses neither.

I just don't seem to be able to persuade update-grub to put
root=UUID= or root=PARTUUID= into the linux command line
for the LFS system. Because update-grub is going to be run every time
there is a kernel update in Debian, I need it to work and just
overriding it by hand isn't a solution.

Instead it is using root=/dev/sdX2. The problem is over the last few
boots I have seen the SSD containing the LFS system come up as
/dev/sda, /dev/sdb and on this boot now, /dev/sdc.

According to the grub-mkconfig documentation[1], if I make sure
GRUB_DISABLE_LINUX_UUID and GRUB_DISABLE_LINUX_PARTUUID are both set
to false, in the absence of an initrd it should use "root=PARTUUID=<>"
in the linux command line, but when I run update-grub that is not
happening, and the device identifier is being used instead.
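For reference, the settings described above amount to the following lines in /etc/default/grub (a sketch, with comments added):

```
# /etc/default/grub -- relevant settings (sketch)
GRUB_DISABLE_OS_PROBER=false        # os-prober enabled so update-grub finds the LFS install
GRUB_DISABLE_LINUX_UUID=false
GRUB_DISABLE_LINUX_PARTUUID=false   # with no initrd, root=PARTUUID=<...> is documented behaviour
```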

Can anyone explain why, and how I can fix this in a way that will
still work the next time the bookworm kernel gets an update?

Thanks

Mark

[1] 
https://www.gnu.org/software/grub/manual/grub/html_node/Root-Identifcation-Heuristics.html#Root-Identifcation-Heuristics



Re: Gradle version in bookworm

2023-08-27 Thread Mark Fletcher
On Sun, 27 Aug 2023 at 07:52, Steve Sobol  wrote:

> On 2023-08-26 03:18, Mark Fletcher wrote:
>
> > continue using it -- but since I can get it onto my machine with zero
> > effort via Intellij
>
> I am a Jetbrains subscriber who uses many of their IDEs, including
> IntelliJ, and if that's the way you want to go, I'm certainly not going
> to tell you not to.
>
> But if you don't need IntelliJ, it seems silly to install it just to get
> Gradle.
>
> You can download the latest OSS version of Gradle from gradle.org.
>
> Am I missing something?
>


Oh, yes — the fact that I never suggested for a moment that the only reason
I installed IntelliJ was to get Gradle. I’ve been an IntelliJ user since
about 2006/7 or so.

What I said was that I noticed that IntelliJ was installing a version of
gradle to suit it, not using the platform-level Gradle, and was regularly
updating it. The point of that, if you read the thread, was that there was
a suggestion earlier in the thread that the reason gradle hadn’t been
upgraded in bookworm and was still on version 4 while upstream was on
version 8 was that it was terribly difficult to do, due to dependencies —
but clearly not, given the ease of installing it manually.

Hope that clears that up.

Mark



Re: Gradle version in bookworm

2023-08-26 Thread Mark Fletcher
On Sat, 5 Aug 2023 at 17:51, Roberto C. Sánchez  wrote:
>
> On Sat, Aug 05, 2023 at 05:21:55PM +0100, Mark Fletcher wrote:
> > Gradle is not some minority, hardly-used tool, so there is presumably
> > a reason why the package hasn't been updated in Debian. Anyone know
> > what it is?
> >
> Becuase it's Very Hard Work(TM).
>
> Updating Gradle in Debian was proposed as a project under Freexian's
> Project Funding initiative and it was accepted and work was done on it
> for several months:
> https://salsa.debian.org/freexian-team/project-funding/-/issues/19
>

Thanks, that makes interesting reading. On the surface, it's clearly _not_ a
major technical challenge to install, considering how easy it is to install
manually. After my original post I discovered that IntelliJ has been
installing successive versions of it on my machine, and the latest version
installed Gradle 8.2, which is the latest. So I will just use that going
forward, and upgrade when I update IntelliJ.

Looks like the issue for _packaging_ Gradle is some of its dependencies.
That effort you provided a link to is / was about getting rid of some
proprietary enterprise plugin, and seems to have uncovered a dependency
chain via Kotlin that slightly bizarrely leads to a dependency on OpenJDK
8... Really? In 2023? If I am reading the discussion right, it looks like
the version of Kotlin we have in Debian depends on OpenJDK 8 and no one is
stepping forward to update that. They are making rumbling noises about
dropping gradle from Debian. I like gradle and will likely continue using
it -- but since I can get it onto my machine with zero effort via IntelliJ
I don't seek to insist that Debian package it (and installing it manually
is easy if I had to... which I don't). So I guess the way out of this
current situation is either for gradle to be dropped from Debian or Kotlin
to get an upgrade. I'd lean towards drop it, except that there is an
acknowledgement that Kotlin isn't going away and will need an upgrade at
some point anyway...

Anyway, thanks, I understand the situation a little better now -- and also
I have a path forward for my own work, so I am happy now.


Mark


Gradle version in bookworm

2023-08-05 Thread Mark Fletcher
Apologies if this is a dumb and/or frequently-answered question,
but... does anyone know why the version of the "gradle" build tool in
bookworm is version 4?

The version in bookworm, 4.4.1-18, was migrated to testing in January
of this year -- an upgrade from version 4.4.1-13 -- at which time the
latest upstream version was at least version 7. So it doesn't appear
to be a "package not maintained any more" issue... The maintainers
appear to be the Debian Java maintenance team, who I would assume
would be unlikely to wander off... Bug reports against the package are
beginning to show evidence that people are finding the age of the
package a problem as unit tests and even some build dependencies
become harder and harder to support.

Gradle is not some minority, hardly-used tool, so there is presumably
a reason why the package hasn't been updated in Debian. Anyone know
what it is?

Thanks!

Mark



Re: Overzealous polkit

2023-07-25 Thread Mark Fletcher
On Mon, Jul 24, 2023 at 22:55 Charles Curley <
charlescur...@charlescurley.com> wrote:

> A fresh install of Debian 12.1 on a Lenovo Yoga 13. I have firewalld
> installed, and firewall-config 1.3.0-1 to manage it. Polkit insists on
> authentication, which is fine. It then has extremely short timeouts (or
> something), so I have to keep re-authenticating. How do I tell it to
> not be so zealous?
>
> --
> Does anybody read signatures any more?
>
> https://charlescurley.com
> https://charlescurley.com/blog/
>
I misread the subject line of this thread as “overzealous polecat” — and
thought “What’s this, a new release of Ubuntu or something?” :)

Mark


Re: Gnome desktop environment

2023-04-22 Thread Mark Fletcher
On Sat, 22 Apr 2023 at 20:53, William Torrez Corea 
wrote:

> I want to delete the Gnome desktop environment.
>
> *What command is used for an elimination complete?*
>
> I use this command but don't get the effect desired.
>
> # apt-get install task-gnome-desktop
>
>
Well, to be fair, if you want to remove it, telling it to install it
probably isn’t likely to produce the desired effect…

You could try using “remove” or “purge” instead but be ready for a lot of
applications that depend on Gnome (or at least on a desktop of some sort).
So the better approach may be to decide what you want to replace it with,
try to install that, and guide the package manager towards uninstalling
Gnome as it tries to sort out the mess that creates.

Mark



Re: netgear wna3100 not supported by linux?

2023-01-09 Thread Mark Fletcher
On Mon, 9 Jan 2023 at 09:46, lsg  wrote:

> i've searched Internet, it doesn't seem supported by linux?? too bad
>
Looks like you need to use ndiswrapper with the Windows drivers to get it
to work. Saw that on an Ubuntu forum but what’s good for the gander is
often good for the goose…

Mark


Re: Package versions in multi-arch

2023-01-06 Thread Mark Fletcher
On Fri, 6 Jan 2023 at 15:39, Steve McIntyre  wrote:

> You've been bitten by a subtle but unfortunately common problem,
> yes. In multi-arch systems the versions of packages have to be totally
> in sync. But the +b1 syntax here means that the i386 package has had a
> binNMU (binary NMU) build which means that they *can't* be in
> sync. The problem should be solved with the next new upload of the
> llvm-toolchain-15 source package - that will bump the version in both
> cases so that they match.
>
> binNMUs are horrible. :-(
>
Thanks very much. That explains it. Guess I will just wait then.

Thanks

Mark



Package versions in multi-arch

2023-01-06 Thread Mark Fletcher
Hi list

I have a package installation problem which leads to a question about
how (and if) package versions interact in different architectures.

My system is an amd64 bookworm system, with multi-arch support and
some packages from i386 installed, to support a vendor-supplied
printer driver and, more relevantly to this problem I think, steam.

For the last week or so package libllvm15:i386 doesn't want to upgrade
and is "kept back" by apt.

$ apt show -a libllvm15:i386

Shows two possible package versions, 1:15.0.6-3 which is installed and
1:15.0.6-3+b1 which is a candidate to install. This situation pertains
only to the i386 version, the amd64 version is at 1:15.0.6-3 and
doesn't seem to have a newer version. This agrees with what I see on
packages.debian.org.

Looking at the dependencies of the two versions of the i386 package,
the only difference I can see is that the new version depends on
libstdc++6 version 2.12.0.10 or newer, while the older, installed
version depends on 2.12.0.09 or newer. However I have 2.12.0.13
installed so that should satisfy either version's dependency.

I also checked dependencies _on_ libllvm15:i386:

$ apt rdepends libllvm15:i386 | awk '/^ Depends/ {print $2}' | awk -F:
'{print $1}' > depends.txt
$ apt list --installed | grep i386 | grep -f depends.txt

This shows me 4 i386 packages are both installed and depend on
libllvm15:i386, looking at "apt show" of all 4 shows that they all
depend simply on the presence of the package and make no demands about
what version of the package is present.

The last thing I could think of was a Conflicts dependency:

$ apt list --installed | grep i386 | awk -F/ '{print $1}' | xargs apt
show -a | grep ^Conflicts: | grep libllvm15

returns nothing, indicating that none of the installed i386 packages
conflict with any version of libllvm15.

In desperation I dragged aptitude out of retirement and did aptitude
full-upgrade, and got a little bit more information. Aptitude wanted
to uninstall the amd64 version of libllvm15 and then upgrade
libllvm15:i386. I didn't let it because that would have broken the
packages that depend on the amd64 version of the package.

Am I reading this right that if two architectures of the same package
are installed on the same system with multi-arch support, they have to
be the same version? If that is true, the solution is just to wait for
the new version of the package to be added in amd64. But... really?
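As a sanity check on those version strings, a quick sketch (using GNU sort -V as a stand-in for dpkg's version comparison) shows the binNMU suffix sorting strictly after the base version, so the two architectures genuinely cannot match until a new source upload:

```shell
# The i386 binNMU version (+b1) sorts after the plain amd64 version
printf '%s\n' '1:15.0.6-3+b1' '1:15.0.6-3' | sort -V
# -> 1:15.0.6-3
# -> 1:15.0.6-3+b1
```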

Thanks

Mark



Re: gnome-remote-desktop on bookworm

2022-12-21 Thread Mark Fletcher
On Tue, 20 Dec 2022 at 15:41, Mark Fletcher  wrote:
>
> Hi
>
> Does anyone know what state gnome-remote-desktop is in on bookworm? I can't 
> get it to work. I have a system recently upgraded to bookworm, running Gnome 
> if that wasn't obvious.
>

This turned out to be caused by port number confusion. Some of the
summary info about gnome-remote-desktop suggests it offers both VNC
and RDP protocols. Digging deeper into the docs it turns out that RDP
is offered by default and you have to dig deeper to turn on VNC. Port
5900 is the VNC port, the RDP port being 3389. gnome-remote-desktop
was listening on 3389, and when I downloaded an RDP as opposed to VNC
client on the client side and tried that, it worked.
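A quick way to see which of the two protocols the daemon is actually serving (a sketch; it assumes lsof is installed and prints a fallback line when nothing is listening):

```shell
# gnome-remote-desktop defaults to RDP on 3389; VNC would be on 5900
lsof -nP -iTCP:3389 -sTCP:LISTEN 2>/dev/null || echo "no RDP listener on 3389"
lsof -nP -iTCP:5900 -sTCP:LISTEN 2>/dev/null || echo "no VNC listener on 5900"
```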

Incidentally, the gnome-remote-desktop docs say you have to use a
command-line tool to turn on VNC support, but when you invoke said
tool, its help only offers info about RDP options... Still haven't
figured out how to turn on VNC support -- but that's OK because I have
RDP working now.

The other thing I have learned through this experience is that there
are a large number of extremely crap RDP client apps available for the
iPad -- I had to kiss a fair number of frogs before I found a decent
one. One of the frogs included a tool that claimed to be _made by
Microsoft_ but didn't map the screen properly, so where you were
clicking wasn't where you thought you were clicking. Never mind, I
found a decent one in the end.

Mark



gnome-remote-desktop on bookworm

2022-12-20 Thread Mark Fletcher
Hi

Does anyone know what state gnome-remote-desktop is in on bookworm? I can't
get it to work. I have a system recently upgraded to bookworm, running
Gnome if that wasn't obvious.

In Gnome settings, under Sharing I have turned on Remote Desktop and Remote
Control, but other clients on my network can't connect, saying the port
can't be contacted. Those machines can connect to the machine in question
in other ways, eg ssh.

I googled for this and found a discussion in an Ubuntu forum where they
used lsof to show that nothing was listening on the RDP port, 5900. The
cause on that occasion was held to be that Ubuntu was using a different
major version of gnome-control-center (41) to that of gnome-remote-desktop
(42). I've checked and in bookworm right now the major version of both is
43.

# lsof -i TCP:5900

returns nothing. Same for all port numbers between 5900 and 5909 (I checked
in case it was using a different virtual desktop number). That suggests to
me the remote desktop server is not listening, although the settings would
have me believe it is. I also tried a reboot although I realise this isn't
Windows but just in case... no dice.

Is anyone using this (relatively new, I gather) feature of Gnome? Did you
have to do anything else to get it to work?

Thanks all

Mark


Apt upgrade problem

2022-10-16 Thread Mark Fletcher
Hi

Tonight I am seeing a behaviour pattern in my Debian Bullseye system that I
have not seen before.

After "sudo apt update", the system informs me there is 1 package that can
be upgraded.

"sudo apt upgrade" reports nothing to do, 0 packages upgraded, 0 newly
installed, 0 to remove and 0 not upgraded...

"apt list --upgradable" shows a new version of the Amazon Workspaces
client, version 4.3.0.1766. It also shows that there is one more version
available.

"apt list -a --upgradable" shows:

Listing... Done
workspacesclient/unknown 4.3.0.1766 amd64 [upgradable from: 4.2.0.1665]
workspacesclient/now 4.2.0.1665 amd64 [installed,upgradable to: 4.3.0.1766]

"sudo apt install" reports:

Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 1 not upgraded.

System doesn't seem to want to install the new version of the Amazon
workspaces client. I'm assuming some dependency not known to the system is
needed for the new version. However I also note the /unknown after the
package name in the new version, which is /now in the current version. I am
not sure what that is, but presumably /unknown isn't good... Can anyone
suggest an approach to investigate why this upgrade won't happen?

In case important, Amazon workspaces client is included in my package list
by "amazon-workspaces-clients.list" in /etc/apt/sources.list.d/ :

deb [arch=amd64] https://d3nt0h4h6pmmc4.cloudfront.net/ubuntu bionic main

That comes from the install instructions page of the Amazon Workspaces
client. It did occur to me that perhaps the new version needs some
additional repository, so I went back and checked but the installation
instructions have not changed, so it seems not.

Thanks in advance

Mark


Re: OT: Router behaviour

2021-02-20 Thread Mark Fletcher
On Thu, Feb 04, 2021 at 08:23:39PM -0500, Stefan Monnier wrote:
> > I powered the router down again, plugged its WAN port into one of the 
> > LAN ports of the ISP-supplied router, and brought it back up.
> 
> > Are you sure you plugged your ISP-router into the WAN port of your
> (Buffalo) router and not into one of the LAN ports?
> 
> The behavior you describe would be easy to explain if it was plugged
> into a LAN port (or if the WAN port was somehow bridged with the LAN
> ports) since in that case you'd have basically a single network with
> packets forwarded between the two routers, and two DHCP servers, making
> it quite possible that a DHCP request received on your router ends up
> being answered by the ISP router instead (since the request is
> broadcasted to all connected machines).
> 

So thanks to everyone who replied to this thread, some really great 
links and suggestions which I am following up. One thing for the record, 
there is absolutely no possibility I connected the LAN port to LAN port 
-- the LAN port on the ISP's router is plugged into the WAN port of the 
old router. That has been checked, double checked and triple checked. 
Whatever is causing it, it isn't that.

There is a switch on the back of the old router, with 2 settings, "AP" 
and "WB". "AP" is obviously "Access Point". Not sure yet what "WB" 
stands for but I suspect some of the links you guys supplied will help 
me figure that out. The router is set to "AP" -- I _think_ it always has 
been, but I might be misremembering that and the switch could easily 
enough have been moved in the course of an international move or while 
lying around on my spare bedroom floor for the best part of a year.

Anyway -- thanks very much all for the input supplied, I have plenty 
reading to do! :)

Mark



OT: Router behaviour

2021-02-04 Thread Mark Fletcher
First apologies for the off-topic post, but I know this community is 
full of experts on this topic and my ask in the end is a simple one:

Can anyone point me at a reasonably accessible guide to the details of 
how IP networks work, in particular the communications that occur 
between router devices that are designed to support home networks? I'm 
computer science trained but from many years ago and if I ever learned 
these specific details I have forgotten them, but I feel equipped to 
understand them. I'm after a certain amount of detail and would prefer 
to avoid adverts or advice of the "just buy our product, plug it in and 
your problems will all be solved" type.

The background to my request is this:

A while ago I moved house (and countries) and since arriving in the new 
house I have been using a WiFi router provided by my broadband provider, 
somewhat reluctantly, but without a really serious alternative since the 
router also contains the ISP's modem. In the old place I used a 
store-bought router+WiFi device of very typical type (Buffalo brand, 
although I don't expect that to be relevant) plugged into the (cable) 
modem.

My kids have been complaining recently about the quality of the WiFi and 
so I thought I'd fire up the old router from the old house and see if 
it's any better. I experimentally fired it up without plugging it into 
anything and the old WiFi networks came up, I could connect to them, and 
got an IP address in the range I used to use at the old house (which is 
different from what I use now, for arbitrary reasons).

I powered the router down again, plugged its WAN port into one of the 
LAN ports of the ISP-supplied router, and brought it back up. It came up 
but seems to have automatically subordinated itself to the ISP-supplied 
router and is now offering up IP addresses in the range supplied by the 
ISP's router... It seems like it has automatically taken a subordinate 
role to the ISP's router. It is still offering up the old network names 
with the old password but when I connect I get an IP address in the 
range used by the ISP's router (said address works fine).

I don't think I expected it to do that, certainly not automatically, and 
before I decide if I am happy with this outcome or not I want to 
understand in detail what just happened and why, so I can understand its 
implications. Just as one example, I want to understand what the 
implications are for the store-bought router's firewall -- has it just 
been bypassed? and so on. Hence the request for a pointer to some good 
documentation. Book, website, whatever you think would be most helpful, 
I would much appreciate any pointers.

Thanks

Mark



Re: Archivemail

2021-01-23 Thread Mark Fletcher
On Sun, Jan 24, 2021 at 12:17:55AM +0100, Jochen Spieker wrote:
> Thank you for pointing out that archivemail will be gone soon. Since I
> am using it as well, I took a quick look at it ("how hard can it be??")
> and tried a quick conversion to Python3:
> https://github.com/solexx/archivemail
> 
> Luckily there is a test suite and only about two thirds of the tests are
> failing, otherwise I might have thought it should work. ;-)
> 
> The main problem is that one has to replace a few modules/functions,
> mostly the long-obsolete rfc822. I think I can get away with throwing
> away the get_filename function completely, which was a little
> problematic due to the dependencies. I guess I have only "solved" the
> easy problems for now, but I still think the conversion should be
> doable.

On digging around prompted by Brian's earlier input I found a 
conversation between a couple of other people and they seemed to start 
that process and then give up on the basis that it was a lot harder than 
they thought...

I'm probably going to just write a python3 script of my own to handle my 
use-case, which is just to sift through /var/mail/ to delete mails 
older than a month. I've found the python3 "mailbox" library which 
includes support for mbox type mail files, which is what I think 
/var/mail/ is. Looking at the API it doesn't look that hard, and I 
have to assume the complexity of porting archivemail comes either from 
the paradigm it operates in or from functionality it provides other than 
simply deleting old mail.

On my travels today I discovered mutt can do the job I want for me with 
a simple command, but I don't want to have to run mutt to do it -- I'm 
"away" for long periods and I want the mail box size to stay under 
control while I'm not paying attention, accepting that means I will 
never read some mail that goes to the mailing list. Hence I want an 
automated solution that happens whether I am paying attention or not.
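For the simple use case described above (keep an mbox trimmed to the last month), a minimal Python 3 sketch using the standard-library mailbox module might look like the following. The 31-day cutoff and the decision to keep messages with a missing or unparsable Date header are assumptions:

```python
import mailbox
import time
from email.utils import parsedate_to_datetime

def prune_mbox(path, max_age_days=31):
    """Delete messages older than max_age_days from the mbox file at `path`."""
    cutoff = time.time() - max_age_days * 86400
    box = mailbox.mbox(path)
    box.lock()  # cooperate with anything still delivering into the file
    try:
        for key, msg in list(box.items()):
            try:
                sent = parsedate_to_datetime(msg["Date"]).timestamp()
            except (TypeError, ValueError):
                continue  # missing or unparsable Date header: keep it
            if sent < cutoff:
                box.remove(key)
        box.flush()
    finally:
        box.unlock()
        box.close()
```

Run from cron against /var/mail/<user> and the pruning happens whether or not anyone is paying attention.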

> 
> I have no idea whether or when I will be able to finish this, because I
> am severly time-constrained due to the pandemic (daycare in the morning,
> regular work in the afternoon until late at night). I am also aware that
> it is probably too late for bullseye, but archivemail can be dropped to
> ~/bin/ easily.
> 
> But well, first me or somebody else has to fix those failing tests.
> 

Despite my decision to have a crack at writing something local for my 
own usecase, I do think it would be good if archivemail made a comeback, 
so I wish you well in that endeavour. I'd offer to help test but as I've 
illustrated, my usecase is very simple...

(still, happy to test that usecase if it would be helpful)

Mark



Re: Archivemail

2021-01-22 Thread Mark Fletcher
On Fri, Jan 22, 2021 at 11:33:13PM +, Brian wrote:
> On Fri 22 Jan 2021 at 23:22:19 +0000, Mark Fletcher wrote:
> 
> > Anyone know what happened to archivemail in Debian?
> > 
> > packages.debian.org shows it was in Jessie, Stretch and Buster, but it 
> > is in neither sid nor Bullseye which presumably means it has been 
> > dropped. Anyone know why?
> > 
> > bugs.debian.org doesn't show any bugs against it and google is coming up 
> > short too.
> 
> It's always a good idea to look at "Developer Information" on a
> package page.
> 
> Python2? Python3?
> 

OK deciphering the above for anyone who might follow after me: 
archivemail is dependent on Python2 which, it appears, is getting 
removed from bullseye (although as of right now it's not gone yet, it's 
installed on my system and I didn't ask for it, so something is clearly 
still dependent on it)

So what alternatives would people recommend? My use case is for deleting 
messages to a local copy of this mailing list after they're a month old, 
which I have previously pulled down to my local user ID from my gmail 
account with fetchmail.

Mark



Archivemail

2021-01-22 Thread Mark Fletcher
Anyone know what happened to archivemail in Debian?

packages.debian.org shows it was in Jessie, Stretch and Buster, but it 
is in neither sid nor Bullseye which presumably means it has been 
dropped. Anyone know why?

bugs.debian.org doesn't show any bugs against it and google is coming up 
short too.

Thanks

Mark



Gnome Terminal in Bullseye

2021-01-22 Thread Mark Fletcher
Hello

Has anyone else noticed in Gnome Terminal in bullseye that, by default, 
it starts in an 80x24 configuration, but if you press F11 to make it 
full screen and then press F11 again to take it back to non-full-screen, 
the configuration it goes back to isn't quite 80x24? It's approx 79x23 
or 22, I would say.

Is there configuration somewhere that controls this? And if there is, is 
there any reason why we would want it configured in Debian not to return 
to the size it came from after being in full-screen mode?

I'm wondering if this is a simple configuration issue, a bug in the 
Debian package, or an upstream bug.

I don't know if this is long-standing behaviour as I have been using 
XTerm for the last few releases due to a different Gnome Terminal bug 
which annoyed me, which seems to be fixed now in bullseye.

Thanks

Mark



Re: Migrating LVM volumes to a new machine

2020-12-12 Thread Mark Fletcher
On Fri, Dec 11, 2020 at 06:53:59PM -0500, Michael Stone wrote:
> On Fri, Dec 11, 2020 at 11:46:46PM +0000, Mark Fletcher wrote:
> > I feel
> > like I can't follow the instructions in the HOWTO because it wants me to
> > unmount the file systems, export the LVs and so on, on the old machine
> > before moving the disk, and I don't see how I can do that on a system
> > that is expecting to use those file systems to operate.
> 
> you must have missed:
> "vgexport/vgimport is not necessary to move drives from one system to
> another. It is an administrative policy tool to prevent access to volumes in
> the time it takes to move them."

Yeah... didn't miss that, but thought "that can't mean what it appears 
to mean, because if it did the whole article would be contradicting 
itself". So I guess "failed to understand" would be closer to the truth 
than "missed".

> 
> Just move the disk and the lvm stuff will show up like normal. The only
> potential gotcha is if you used the same volume group names on both systems,
> then that would need to be resolved (by renaming one of them)
> 

Oh. Good. Thanks! Yes I had considered that different VG names would be 
needed. I just let the installer do its thing with the naming last time, 
guess when I install this time on the new machine I will intervene.

That generates a followup question, out of curiosity. Presumably for 
that to work, all the info needed for the computer to learn about the VG 
at boot must be stored on the PV. What happens when there is more than 
one PV for a VG? Is the info stored on all of them, or just one?

Thanks

Mark



Migrating LVM volumes to a new machine

2020-12-11 Thread Mark Fletcher
Hello

I would like to understand how to move a LVM VG from one machine to 
another, when the disk to be moved contains filesystems key to the 
source system. I have read section 13.6 of the LVM HOWTO which talks 
about moving VGs. However the context of my situation is I am 
cannibalising an old machine and moving the VG that contains the LVs 
mounted on /, swap and /home of the source machine. That, apart from the 
non-LVM /boot partition, is all that is on this SSD.

The disk will not be the primary disk on the new machine, and the new 
machine's /, swap and /home will be elsewhere, but I will want to be 
able to access the current contents of the disk from the old machine on 
the new machine for a period of time before I decide to wipe it. I feel 
like I can't follow the instructions in the HOWTO because it wants me to 
unmount the file systems, export the LVs and so on, on the old machine 
before moving the disk, and I don't see how I can do that on a system 
that is expecting to use those file systems to operate.

I thought of booting a live system from a USB stick, so the affected LVs 
aren't mounted or in use, but then I'll face the problem of how to get 
the live system to be aware of the LVM setup existing on the SSD.

I guess the bottom line is I am missing some understanding of how LVM 
works in detail, and in particular how a machine that didn't create a 
particular VG and its contents can nonetheless be made to recognise it. 
Can anyone fill me in?

Thanks 

Mark



Re: Permissions on NFS mounts

2020-12-10 Thread Mark Fletcher
On Wed, Dec 09, 2020 at 03:54:10PM -0500, Dan Ritter wrote:
> Paul M Foster wrote: 
> > I have two users on the client: paulf 1000 and nancyf 1001. On the
> > server, I have two users: pi 1000 and paulf 1001. I can mount the NFS
> > share from the server to /mnt on my client. But any files belonging to
> > me (user 1001 on the server) look like they belong to nancy (user 1001
> > on the client. More importantly, if I copy files to this share from the
> > client, they will look like they belong to pi (user 1000) on the server.
> > 
> > Is there some way in the /etc/exports file to adjust the parameters so
> > that files retain my ownership on the server?
> 
> You're looking for userid mapping, handled by idmapd.
> 
> Your best long-term solution is to make the userids consistent
> across machines by making a decision about who will be 1000, 
> 1001 and 1002, and then changing /etc/passwd and running
> suitable "chown -R" commands, probably followed by find
> commands.
> 
> Debian automatically starts user numbering at 1000, so it's a
> good idea to have a consistent install username, if you can
> arrange it.
> 


This brings up an interesting thought. In the situation where you align 
user IDs across a number of machines for this purpose, you'll inevitably 
end up with situations where users are created on some of the machines 
only for the purpose of keeping the IDs in synch so they can all play 
nice with the NFS. Left alone, having unneeded users on a given machine 
could be a security threat, at least in the sense that it provides a 
greater than necessary attackable surface area. What can be done about 
that? Obviously one thing would be setting the shell to /usr/sbin/nologin 
(or /bin/false) in the password file of those machines that don't need a 
given user, to prevent interactive logins. What else could be done? Is 
there a way to put an 
account "beyond use", in any way including su, sudo etc, while still 
having the machine recognise the user for being a user and therefore not 
messing up the mapping of user IDs on shared resources like NFS? In 
other words, create the sense of "yes this user exists, but they are not 
welcome here"?
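One sketch of the "exists but not welcome" idea: keep the passwd entry (so the UID mapping for NFS still works) but give the account no usable shell and a locked password. The hedged helper below audits which human-range accounts still have an interactive shell; the UID threshold and the list of "safe" shells are assumptions:

```python
import pwd

SAFE_SHELLS = {"/usr/sbin/nologin", "/sbin/nologin", "/bin/false"}

def interactive_accounts(entries, min_uid=1000):
    """Return (name, uid, shell) for human-range accounts that can still log in.

    `entries` is any iterable of pwd-style structs, e.g. pwd.getpwall().
    """
    return [(e.pw_name, e.pw_uid, e.pw_shell)
            for e in entries
            if e.pw_uid >= min_uid and e.pw_shell not in SAFE_SHELLS]

if __name__ == "__main__":
    # Accounts flagged here could then be neutralised with something like:
    #   usermod -L -s /usr/sbin/nologin <name>
    for name, uid, shell in interactive_accounts(pwd.getpwall()):
        print(f"{name} ({uid}) still has shell {shell}")
```

Locking the password as well (usermod -L) also blocks su to the account; sudo access is governed separately by the sudoers configuration.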

Mark



Re: Two questions as I prepare for a new install

2020-12-09 Thread Mark Fletcher
On Mon, Dec 07, 2020 at 06:06:43PM -0700, Charles Curley wrote:
> On Tue, 8 Dec 2020 00:00:54 +
> Mark Fletcher  wrote:
> 
> > 1. Does anyone have any advice (or a link to official advice)
> > regarding whether a new bullseye install is better done with the
> > testing installer at this time, or by first installing buster and
> > then upgrading?
> 
> In general, you are better off installing new rather than upgrading.
> Installing new means less Buster cruft on your system compared to
> upgrading buster. Upgrading is a PITA. Why install and then upgrade
> when installing will get you what you want?
> 

Thanks, great to know -- but just for the record that didn't use to be 
the advice -- I'm sure a search through the archives of this list will 
show times when people advised that the way to install testing was to 
install stable and then upgrade. That sounded like a faff, for exactly 
the reasons you mentioned, hence why I asked -- was hoping I'd get the 
answer you gave!

Anyone have any thoughts on the second question I asked?

Thanks

Mark



Two questions as I prepare for a new install

2020-12-07 Thread Mark Fletcher
Hello list

I am currently amassing the hardware for a new PC build as a Christmas 
present to myself, and plan to install Bullseye on it when the hardware 
is all here.

My current system runs Buster and I thought it would be interesting to 
see what's coming.

I have two questions:

1. Does anyone have any advice (or a link to official advice) regarding 
whether a new bullseye install is better done with the testing installer 
at this time, or by first installing buster and then upgrading?

2. My new graphics card is a ASUS-branded nVidia GeForce RTX-2060, which 
means I get to move back to the core non-legacy nVidia driver instead of 
the legacy-legacy one I have been forced to use for some years on my old 
system due to the hardware being 11 years old. Yay... EXCEPT I have read 
hints there is a problem with the up-to-date nVidia driver and recent 
kernels. Does that affect Bullseye? Are there caveats or workarounds I 
should be aware of?

Either direct answers or links to places that answer these would be 
appreciated -- I've been a bit out of the loop recently and suspect I've 
missed a few developments.

Thanks

Mark



Re: VPN ideas

2020-12-07 Thread Mark Fletcher
On Mon, Dec 07, 2020 at 04:35:09PM -0500, Roberto C. Sánchez wrote:
> On Mon, Dec 07, 2020 at 11:27:25PM +0200, ellanios82 wrote:
> >  Hi List   :)
> > 
> > 
> >  - any suggestions please , for a handy VPN for everyday use : no specific
> > purpose, but only to add a little more privacy ??
> > 
> >  - and , is this a reasonable idea ?
> > 
> It is difficult to know since you don't specify any actual requirements,
> but OpenVPN or WireGuard should be suitable for most uses.
> 
+1 for OpenVPN. I've used it for some years and love it.

Some time ago I also used HMA (stands for "Hide My A$$" I believe), as 
something I could use across Android devices and Linux. It also did the 
job and let me pretend I was in a different country.

Mark



libXp -- was there a better way?

2020-10-23 Thread Mark Fletcher
Hello

I am running Buster on c2009 amd64 hardware -- one of the earliest Intel 
Core i7s. This was a clean install of Buster done a little over a year 
ago. Previously I had run many older flavours of Debian on this hardware 
over the years.

I occasionally use a specialist piece of software called xephem, which 
is old but doesn't to my knowledge have a newer replacement that's 1% as 
good. I tried to fire it up the other night for the first time since I 
installed buster. It refused to run because libXp.so.6 was missing. A 
bit of googling showed me that this is an old, deprecated library for 
printing in X. I couldn't run the executable of xephem I had previously 
built and I couldn't build the latest version because of its expectation 
to find the include  which is provided by the same 
library... ("latest" version isn't very new...)

libXp.so.6 was last in Debian in Jessie, in package libxp6. Looking at 
the dependencies of libxp6 in Jessie, they were all installed on my 
system (obviously newer versions) except multiarch-support. So I 
downloaded the package from Jessie and used gdebi to install it on my 
system. This worked, and now xephem runs.

To avoid trouble when I next upgrade I propose from here to create a 
dummy package for xephem using equivs to register the dependency on 
libxp6, and then mark libxp6 as automatically installed, so the package 
manager in a future upgrade can figure out it can remove xephem's dummy 
package and thereby get rid of libxp6 if it causes conflicts. I have no 
idea if xephem will now be able to print, but I don't care as I don't 
want to use its printing functionality, I only did any of this because 
the missing library was preventing it from starting.
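For the record, the equivs control file for such a dummy package might look roughly like this (the package name, version, and description are made up for illustration; the file would be built with equivs-build and installed with dpkg -i):

```
Section: misc
Priority: optional
Standards-Version: 3.9.2
Package: xephem-deps
Version: 1.0
Depends: libxp6
Description: dummy package recording xephem's run-time dependencies
 Keeps libxp6 from being autoremoved while a locally built xephem is in use.
```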

My question is, was there a better way to resolve this dependency? And, 
in a Buster system which has been installed not upgraded, am I in danger 
of creating trouble for myself by having this old package on my system?

Thanks

Mark



Re: R performance

2020-05-13 Thread Mark Fletcher
On Tue, May 12, 2020 at 12:06:52PM -0500, Nicholas Geovanis wrote:
> 
> You don't mention which distro you are running on the EC2 instance, nor
> whether R or the C libraries differ in release levels. Moreover, that EC2
> instance type is AMD-based not Intel. So if not an apples-to-oranges
> comparison, it might be fujis-to-mcintoshs.

The distro on EC2 was Amazon Linux -- the current Amazon Linux build 
they offer that isn't Amazon Linux 2. Sorry I don't remember the exact 
build number. I did also try Amazon Linux 2 and got similar results, 
tainted by the fact that I had a little bit of trouble on THAT occasion 
building the tidyverse libraries in R, and may possibly have ended up 
with a not-entirely-clean install as a result on that one occasion. So 
best to ignore that and concentrate on the Amazon Linux (not Amazon 
Linux 2) attempts, of which I had a couple; they were consistent. 

Locally I am running Buster.

On EC2 I commissioned the fresh machine, installed R from the Amazon Linux 
repositories, which by the way is 3.4.1 "Single Candle" -- not quite 
what is in the Debian repositories but different by a minor version.

EC2 used to offer Debian but they don't any more. The closest I could 
get would be Ubuntu.

But we are talking about the same R code and same data running with a 
13-fold performance difference -- I don't believe that is down to the 
Linux distro per se, or AMD vs Intel. Something else is going on here. 
The EC2 box has 128GB of RAM and I could see the R instance using about 
1.3GB, which is what it does on my local box too (24GB RAM here).

I do get that we are talking about virtualised CPUs on the EC2 box and 
virtualisation introduces a penalty of some sort -- but 13-fold? 
Compared to 10-year-old physical hardware? Sounds wrong to me. As I say, 
something else is going on here. Especially when one considers, as I 
mentioned before, that past experiments with Java have not seen a 
significant performance difference between EC2 and my environment.

> 
> Long ago I built R from source a couple times a year. It has an
> unfathomable number of libraries and switches, any one of them could have a
> decisive effect on performamce. Two different builds could be quite
> different in behavior.
> 

Right -- that's what prompted my original question. I was/am hoping 
someone might be in a position to say "well it could be the fact that we 
set the XYZ flags in the build of R in Debian..." that would give me a 
rabbit hole to chase off down.

D.R. Evans' helpful point about his experiences with multi-CPU usage in 
recent Debian builds of R, for example -- even though that's not what 
I'm seeing in my runs, it does imply thought has gone into optimal CPU 
usage in the Debian R build...

Overnight I've run the job on my own machine now, by splitting the job 
up into 10 pieces and running 2 parts each in 5 parallel batches -- I 
was loath to do that at first as the machine is old and self-built and I 
worried about overheating it, but me of little faith, it handled it 
fine. So the question has become academic but I would like to get some 
sort of explanation so I can adjust for the future.

Mark



Re: R performance

2020-05-12 Thread Mark Fletcher
On Tue, May 12, 2020 at 08:16:52AM -0600, D. R. Evans wrote:
> Mark Fletcher wrote on 5/12/20 7:34 AM:
> > Hello
> > 
> 
> I have noticed that recent versions of R supplied by debian are using all the
> available cores instead of just one. I don't know whether that's a debian
> change or an R change, but it certainly makes things much faster (one of my
> major complaints about R was that it seemed to be single threaded, so I'm very
> glad that, for whatever reason, that's no longer the case).
> 
Thanks, but definitely not the case here. When running on my own 
machine, top shows the process at 100% CPU, the load on the machine 
heading for 1.0, and the Gnome system monitor shows one CPU vCore 
(hyperthread, whatever) at 100% and the other 7 idle.

R is certainly _capable_ of using more of the CPU than that, but you 
have to load libraries eg snow and use their function calls to do it -- in 
short, like in many languages, you have to code for parallelism. I tried 
to keep parallelism out of this experiment on both machines being 
compared.

Mark



R performance

2020-05-12 Thread Mark Fletcher
Hello

I have recently had cause to compare performance of running the R 
language on my 10+-year-old PC running Buster (Intel Core i7-920 CPU) 
and in the cloud on AWS. I got a surprising result, and I am wondering 
if the R packages on Debian have been built with any flags that account 
for the difference.

My PC was a mean machine when it was built, but that was in 2009. I'd 
expect it would be outperformed by up to date hardware.

I have a script in R which I wrote which performs a moderately involved 
calculation column-by-column on a 4000-row, 1-column matrix. On my 
Buster PC, performing the calculation on a single column takes 9.5 
seconds. The code does not use any multi-cpu capabilities so it uses 
just one of the 8 available virtual CPUs in my PC while doing so. (4 
cores, with hyperthreading = 8 virtual CPUs)

Running the same code on the same data on a fairly high-spec AWS EC2 
server in the cloud, (the r5a-4xlarge variety for those who know about 
AWS) the same calculation takes 2 minutes and 6 seconds. 

Obviously there is virtualisation involved here, but at low load with 
just one instance running and the machine not being asked to do anything 
else I would have expected the AWS machine to be much closer to local 
performance if not better, given the age of my PC.

In the past I have run highly parallel Java programs in the two 
environments and have seen much better results from using AWS in 
Java-land. That led me to wonder if it is something about how R is 
configured. I am not getting anywhere in the AWS forums (unless you pay 
a lot of money you basically don't get a lot of attention) so I was 
wondering if anyone familiar with how the R packages are configured in 
Debian might know whether anything has been done to optimise performance 
that would explain why R is so much faster on my Debian machine. Is it 
purely local hardware versus virtualised? I am struggling to believe 
that, because I don't see the same phenomenon in Java programs.
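One way to separate "R build configuration" from "raw single-core speed of the vCPU" is to time an interpreter-independent, single-threaded workload on both machines. A hedged sketch; the workload and its size are arbitrary, chosen only to be CPU-bound for a noticeable interval:

```python
import time

def workload(n=2_000_000):
    """CPU-bound, single-threaded arithmetic; returns a checksum."""
    total = 0.0
    for i in range(1, n + 1):
        total += (i % 7) * 0.5 - (i % 3) * 0.25
    return total

if __name__ == "__main__":
    start = time.perf_counter()
    result = workload()
    elapsed = time.perf_counter() - start
    # If the wall-clock ratio between the two machines is far smaller than
    # the 13x seen in R, the gap likely lies in the R builds, not the CPUs.
    print(f"checksum={result:.2f} elapsed={elapsed:.3f}s")
```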

Thanks for any ideas

Mark



iPhone as bluetooth audio source for buster

2020-03-16 Thread Mark Fletcher
Hello

Recently I wanted to connect my iPhone 7 to a new Buster install in the 
same way I had many years before with an earlier iPhone and earlier 
Debian, so I could play music from it through my speakers.

Bluetooth setup on the Debian machine is basically working; I can 
connect to a variety of devices and can play audio from the PC through 
bluetooth headphones etc.

Bluetooth setup on the phone also seems to be fine as it can connect to 
and use bluetooth headphones, speaker, my car etc.

I have successfully paired and connected my phone and my Debian machine, 
but the Debian machine doesn't seem to recognise the phone as an audio 
source. Once paired and connected, my GNOME seems ready to use a network 
through the phone's bluetooth, but if I start to play audio on the 
phone, I get silence -- which tells me the phone expects it to work as 
it isn't using its speaker, but something isn't right at the computer 
end.

Instructions on the internet are, I suspect, out of date, referring to 
older versions of bluez. In particular I see references to adding text 
to /etc/bluetooth/audio.conf, but that file does not exist on my system 
and according to apt-file there is no package in Buster that would put 
it there.

Nonetheless following the possibly-antiquated instructions, after 
connecting the phone to the computer I try to create a loopback device 
for pulseaudio with:

$ pactl load-module module-loopback \
source=bluez_source.XX_XX_XX_XX_XX_XX \
sink=alsa_output.pci-_00_1b.0.analog-stereo



which gives me the response:

Failure: Module initialization failed

Anyone have any advice what I should check to find what's missing?
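One common cause of "Module initialization failed" is simply that the named source does not exist yet (for instance, the phone is connected but not using the A2DP source profile), so it can help to list the sources first and confirm the exact bluez name. A hedged helper, assuming pactl's tab-separated short output:

```python
import subprocess

def find_bluez_source():
    """Return the first bluez_source.* name pactl reports, or None."""
    try:
        out = subprocess.run(["pactl", "list", "short", "sources"],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None  # pactl not installed / no PulseAudio
    for line in out.splitlines():
        # short format: index<TAB>name<TAB>driver<TAB>...
        name = line.split("\t")[1] if "\t" in line else ""
        if name.startswith("bluez_source."):
            return name
    return None

if __name__ == "__main__":
    print(find_bluez_source() or "no bluez source found")
```

If nothing is printed, the loopback module has nothing to bind to, and the problem is upstream of pactl (profile selection on the bluez side).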

Thanks in advance

Mark



Re: Hard disks auto-spinning-down

2019-10-28 Thread Mark Fletcher
On Mon, Oct 28, 2019 at 07:44:48PM +0200, Andrei POPESCU wrote:
> On Ma, 01 oct 19, 15:49:57, Alex Mestiashvili wrote:
> > 
> > You may want to try hd-idle, it is not yet available in stable, but one
> > can install it from testing (it is not advisable in general, but the
> > divergence between buster and testing is not that big right now)
> > wget it from
> > http://ftp.de.debian.org/debian/pool/main/h/hd-idle/hd-idle_1.05+ds-2_amd64.deb
> > or any other Debian mirror and edit /etc/default/hd-idle in order to
> > start the daemon.
> > See man hd-idle for the details.
> 
> One could also write to debian-backports, CC: the maintainer and ask 
> nicely for a backport ;)
> 
Thanks for all this help, guys. Does anyone have any thoughts on why one 
generation of an external disk cage wouldn't require this and just spun 
down the disks automatically when idle, but the new one does require 
incantations to do so? Bearing in mind that a Mac-using friend of mine 
reports the same (new) model of cage does spin down the disks when 
connected to a Mac without him having to have made any settings, so the 
cage isn't against spinning down the disks or anything weird... There's 
no reference at all to spinning down the disks in the cage's manual, but 
there wasn't in the old generation's manual either.

Thanks

Mark



Re: RStudio in Buster

2019-10-02 Thread Mark Fletcher
On Tue, Oct 01, 2019 at 06:27:48AM +0300, Reco wrote:
>   Hi.
> 
> On Mon, Sep 30, 2019 at 11:45:41PM +0100, Mark Fletcher wrote:
> > On Mon, Sep 30, 2019 at 09:28:05AM +0300, Reco wrote:
> > > On Sun, Sep 29, 2019 at 10:34:17PM +0100, Mark Fletcher wrote:
> > > > The most recent package they provide is aiming at Stretch -- they don't 
> > > > seem to have produced a Buster version yet.
> > > 
> > > It says otherwise here [1]:
> > > 
> > > Studio 1.2.5001 - Ubuntu 18/Debian 10 (64-bit)
> > > 
> > > and here [2]:
> > > 
> > > Supported branches:
> > >   Debian buster (stable)
> > 
> > OK it seems they think they've made a buster compatible version, but 
> > they evidently didn't test it very well, since the fact remains the 
> > program refuses to run without libssl1.0.2, which renders it unable to 
> > work in pure buster.
> 
> Installed this package along with r-base-core.
> Launched /usr/lib/rstudio/bin/rstudio.
> Successfully run demo().
> 
> Stock buster 10.1, without libssl1.0.0 or libssl1.0.2.
> 
> In short, seems to work for me.
> 
> Reco
> 

Likewise, stock buster here, I did have libssl1.1 installed. Launched by 
Gnome app search or by running rstudio from command line and letting 
bash find it. The package installs a link in /usr/bin pointing at 
/usr/lib/rstudio/bin/rstudio so we are running the same thing.

Curiouser and curiouser.

Also your statement saying no libssl installed confuses me, since the 
package is dependent on libssl1.0.0 or libssl1.0.2 or libssl1.1 as you 
pointed out, so surely installing it would install one of those 
libraries (in buster, 1.1)?

Mark



Re: Hard disks auto-spinning-down

2019-09-30 Thread Mark Fletcher
On Mon, Sep 30, 2019 at 01:41:37AM -0700, B wrote:
> 
> 
> On 9/29/19 4:30 AM, Mark Fletcher wrote:
> > Any thoughts on where I might look to find settings that can be tweaked
> > to make it spin down when idle?
> 
> 
> See sdparm and hdparm tools. hdparm is probably the wrong tool because it's
> for internal drives connected to IDE/ATA/SATA busses. The reason sdparm
> works for USB drives is because of SCSI-over-USB emulation. See man page for
> more info.
> 
> The best way to do it is with a udev rule that will run some commands when
> the USB device gets plugged in. Otherwise the device resets its config each
> time you plug it in.
> 

Thanks a lot for the suggestions and the details. I will try this later. 
Much appreciated.

Do you (or anyone else) have any thoughts on why this is necessary in 
Buster but wasn't in Stretch? Or is it more likely to be that the new 
cage isn't triggering a udev rule the old one was or something?
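
For reference, a hedged sketch of the udev-rule approach suggested above: run 
sdparm to set a standby timer whenever the cage appears. The vendor/model IDs 
are placeholders (find the real ones with lsusb), and the STANDBY/SCT field 
names should be double-checked against 'man sdparm' for your drives. The rule 
is written to a local file here; the real one would go in /etc/udev/rules.d/ 
followed by 'udevadm control --reload'.

```shell
# Hypothetical rule: when a block device with the given USB IDs appears,
# enable the standby power condition with a timer (SCT is in 100 ms units,
# so 12000 = 20 minutes). IDs and field names are assumptions -- verify them.
cat > 90-usb-spindown.rules <<'EOF'
ACTION=="add", SUBSYSTEM=="block", ENV{ID_VENDOR_ID}=="0480", ENV{ID_MODEL_ID}=="a006", RUN+="/usr/bin/sdparm --set STANDBY=1 --set SCT=12000 $devnode"
EOF
grep -c 'RUN+=' 90-usb-spindown.rules   # prints 1
```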

Thanks again for your help

Mark



Re: RStudio in Buster

2019-09-30 Thread Mark Fletcher
On Mon, Sep 30, 2019 at 09:28:05AM +0300, Reco wrote:
>   Hi.
> 
> On Sun, Sep 29, 2019 at 10:34:17PM +0100, Mark Fletcher wrote:
> > The most recent package they provide is aiming at Stretch -- they don't 
> > seem to have produced a Buster version yet.
> 
> It says otherwise here [1]:
> 
> Studio 1.2.5001 - Ubuntu 18/Debian 10 (64-bit)
> 
> and here [2]:
> 
> Supported branches:
>   Debian buster (stable)
> 

OK it seems they think they've made a buster compatible version, but 
they evidently didn't test it very well, since the fact remains the 
program refuses to run without libssl1.0.2, which renders it unable to 
work in pure buster.


> > The Stretch-facing package installs into Buster without error, but then 
> > fails when you try to launch it because it has a dependency on 
> > libssl1.0.2 and Buster uses libssl1.1 (and presumably this dependency 
> > isn't recorded at the package level)
> 
> Again, dpkg disagrees with you:
> 
> $ dpkg -I /tmp/rstudio-1.2.5001-amd64.deb  | grep Dep
>  Depends: libedit2, libssl1.0.0 | libssl1.0.2 | libssl1.1, libclang-dev, 
> libxkbcommon-x11-0,  libc6 (>= 2.7)

Nope, that is perfect agreement with me, not disagreement. The package 
says it needs libssl1.0.0 OR libssl1.0.2 OR libssl1.1, which allows 
buster to install it without dependency problems, using libssl1.1 to 
fulfil the dependency, but then as I said the program refuses to run if 
libssl1.0.2 is not installed, even if libssl1.1 is present. So that 
dependency info in the package is incorrect. The app actually stops with 
an error message on launch saying words to the effect of "I couldn't 
find libssl1.0.2". It then tries, presumably as a fallback, to find a 
particular version of libcrypto (I forget what version precisely) and in 
buster fails at that too. Installing libssl1.0.2 resolves the problem 
and lets the app start.
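
For anyone wanting to check this on their own install: the libraries a binary 
is dynamically linked against can be listed with ldd (libraries loaded via 
dlopen, which may be what RStudio does here, won't show up). Demonstrated on 
/bin/ls, since the rstudio path may differ or be absent on your machine:

```shell
# Substitute /usr/lib/rstudio/bin/rstudio for /bin/ls to see which libssl,
# if any, the binary is actually linked against.
ldd /bin/ls | grep -c 'libc'   # counts libc-ish lines; >= 1 on a glibc system
```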

> I suggest you to update your RStudio package and to forget about
> libssl1.0.

The version I downloaded is 1.2.5001, which is the latest version for 
Debian and is the version you were looking at too. As we've established 
here, the package's dependency info and the real world dependencies of 
the application are not in sync, making it APPEAR like it should work 
in buster but it does not actually work in buster. When I googled this 
problem the only solution I found was from someone else who felt, like 
me, that while installing libssl1.0.2 works, it sucks and could be 
introducing who-knows-what potential future subtle problems. Hence my 
question (helpfully answered by deloptes) about if there is a safe way 
to do that.

Mark



RStudio in Buster

2019-09-29 Thread Mark Fletcher
Hello

The RStudio application, a popular IDE-like tool for programming in R, 
is not to my knowledge packaged in the mainstream Debian repositories. 
The makers of RStudio, however, provide a package which can be 
downloaded from their website for installation in Debian.

The most recent package they provide is aiming at Stretch -- they don't 
seem to have produced a Buster version yet.

The Stretch-facing package installs into Buster without error, but then 
fails when you try to launch it because it has a dependency on 
libssl1.0.2 and Buster uses libssl1.1 (and presumably this dependency 
isn't recorded at the package level)

If one downloads libssl1.0.2 from the Debian package pool and installs 
it, it appears to install OK and RStudio starts working -- but, what 
damage / compromise is that likely to have done to the system? Is it OK 
to do this? Should one take other steps to prevent libssl1.0.2 being 
used by other applications?

Thanks

Mark



Re: Rescuing hard disks

2019-09-29 Thread Mark Fletcher
On Fri, Sep 27, 2019 at 01:41:58PM +0100, Jonathan Dowland wrote:
> On Mon, Sep 23, 2019 at 03:11:07PM +0500, Alexander V. Makartsev wrote:
> > If I understood this right, you have two disks with data and they were
> > previously configured as RAID1 volume.
> > What make\model RAID-controller do you use? Because "cages" by
> > themselves offer only SATA\SAS ports for disks to connect them.
> > RAID functionality is provided by OS (software RAID), or RAID-controller
> > (hardware RAID).
> 
> This is indeed, key to providing a useful answer. Reading between the lines,
> I suspect OP is using Hardware RAID, and most likely, they've lost their
> data.
> 
Nope, as mentioned earlier in the thread, very glad to report no such 
loss occurred. The first advice in this thread was right, and recreating 
the partition table destroyed by the hardware RAID in the cage fixed it. 
The filesystem in the partition hadn't been touched.

Mark



Hard disks auto-spinning-down

2019-09-29 Thread Mark Fletcher
Since a fresh install of buster, an external USB3 hard disk cage from 
Terramaster that I own is not automatically spinning down the disks in 
it when they go unused for a time.

I used a previous generation of the cage with Stretch; it reliably spun 
down the disks when they were not in use (actually a little too quickly 
for my taste) and I don't recall doing anything to make that happen.

Any thoughts on where I might look to find settings that can be tweaked 
to make it spin down when idle? A friend of mine uses the same cage with 
a Mac and says it spins down when not in use, so I feel like I should be 
able to do it somehow.

The only thing I can think of that I've tweaked since installing buster 
is to disable suspend, as I didn't want the whole computer suspending 
when I was away for a bit, as this computer does a lot of background 
processing. I wonder if I overdid that and disabled something I should 
have left enabled? The solution to that problem involved disabling a 
couple of systemd targets.

I just re-googled that problem because I couldn't remember what targets 
I disabled -- and now I see on the wiki that they are sleep.target 
hibernate.target suspend.target and hybrid-sleep.target. The wiki says 
to do "systemctl mask" on those targets but I suspect I followed someone 
else's advice and did "systemctl disable" on those targets.
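
In case it helps: the difference matters because "disable" only removes the 
[Install]-section symlinks, so a dependency can still pull the unit in, while 
"mask" links the unit name to /dev/null so nothing can start it. What mask 
creates can be demonstrated in a scratch directory, no systemd involved:

```shell
# A masked unit is literally a symlink to /dev/null in /etc/systemd/system;
# we mimic that in a throwaway directory instead of touching the real one.
mkdir -p scratch
ln -sfn /dev/null scratch/sleep.target   # what 'systemctl mask sleep.target' creates
readlink scratch/sleep.target            # prints: /dev/null
```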

Any link to this problem? Otherwise where should I look?

Thanks

Mark



Upgrade Adventure Stretch-to-Buster

2019-09-24 Thread Mark Fletcher
Hi list

This long email is just a report on my recent stretch to buster upgrade 
experience. I had a bit of an adventure, didn't handle some steps well, 
and thought the experience would be useful to put out there for others 
to learn from / avoid some mistakes I made.

THERE IS NO QUESTION / PROBLEM TO SOLVE AT THE END OF THIS -- it's just 
here for info and in the hope it will help someone. If you don't like 
long emails, please know that I couldn't care less.

I was upgrading a Stretch desktop system to Buster. I've moved most of 
my most critical activities into the cloud and so my machine isn't 
nearly as mission-critical as it once was, but outages would still be 
annoying enough that I would want ideally to avoid significant downtime.

I have other users on this machine, who occasionally access my machine 
remotely via a VPN, but they are very much minority users and I am the 
most common user on my machine by a long way.

I perform 5-days-a-week backups using Amanda. The machine in question 
here happens to be the Amanda server as well as a client. The amanda 
virtual tapes and holding disk are in a USB3 disk cage. The cage also 
contains a RAID1 disk pair which contains data of various types.

Internally my machine has two 1TB SSDs, one of which contains most of my 
filesystem and the other is mounted at /opt. /opt contains a fairly 
extensive mariadb server, a SVN repository, and hard disk space for 2 
Windows VMs.

The upgrade was expected to have a large effect on the disk containing / 
and minimal effect on the disk containing /opt.

I verified before starting that my most recent backups had been 
successful. I disabled the systemd timer job I'd created to run the 
backups each night so that if I did have a messed-up system 
half-upgraded during the process, it didn't corrupt my backups. However 
I wasn't super confident that I could restore from my backups 
willy-nilly as I had had only limited occasion to do so before (turns 
out I didn't need to worry about this, I ended up restoring liberally 
from my backups and it went fine).

Before starting I additionally rsync'd my entire home directory and 
select parts of /etc (mysql / mariadb config, hard-won exim4 config, 
openvpn config etc) to spare disk capacity in the drive cage ie 
off-machine, just in case.

I MISSED my SVN config in this backup, incidentally. I had an automatic 
nightly backup of it but I did not hand-crank an additional backup of it 
before starting, because I forgot about it.

I carefully read the upgrade instructions on the Debian website. I 
noticed that I needed to migrate to ostensibly-predictable network 
interface names before doing the upgrade, so I duly executed the steps 
prescribed for that. That worked fine and internet access was none the 
worse for it afterward, before upgrading.

I noticed and didn't completely understand some warnings about ssh and, 
I think, old types of keys. I didn't do anything about that because I 
didn't really understand what that was saying. Sometimes I think some of 
these notes could do with a "What This Means In Plain English" 
section... Anyway that didn't seem to cause any problems in the end.

I went through my sources.list.d directory and peeled out a couple of 
things I didn't expect to do in the upgrade eg Virtualbox's repository, 
the Google Talk plugin (gawd alone knows where that came from), and 
Christian Marillat's deb-multimedia repository. I thought I should do 
the upgrade without them and hoped to wean myself off deb-multimedia 
while I was at it. This, as it turned out, was a huge mistake.

After all this I modified my sources.list according to the upgrade 
instructions and apt update'd.

Next, I did an apt upgrade to upgrade some stuff before the main apt 
dist-upgrade, again following the upgrade instructions. This was the 
last command my system executed as a healthy system...

Obviously there was a lot to download and so on, so I let it get on with 
it. Eventually, after a lot of downloading and quite a lot of package 
upgrading, the process failed with an error. The issue was gstreamer and 
its library, which had hit a problem trying to upgrade because they were 
trying to overwrite a file also contained in the gstreamer-plugins-bad 
package.

This turned out to be a deb-multimedia problem. I had hoped that 
upgrading to Buster versions of packages would supersede the 
deb-multimedia repository package versions, but they didn't.

Now, in this situation, it didn't seem to be possible to use apt to go 
forward or back. If I tried to repeat the upgrade, install anything new, 
remove anything, or anything else, it would immediately stop saying 
there are broken packages, fix them with apt --fix-broken install. And 
if I did apt --fix-broken install, it would complain about the file 
write conflict I highlighted above.

At this point I tried to use the instructions in the upgrade 
instructions for handling this error message, and I don't think I did it 
right because 

Re: Problems with Buster and Bluetooth

2019-09-24 Thread Mark Fletcher
On Tue, Sep 24, 2019 at 01:21:03PM -0400, David Parker wrote:
> Ok, I think I may have solved the connectivity issue.  Some additional
> Googling revealed that GDM starts an instance of PulseAudio, and that
> conflicts with the PulseAudio server used by the Bluetooth device.  The
> steps to stop GDM from starting PulseAudio can be found online, and I've
> adapted them for Buster here:
> 
> (as root):
> echo "autospawn = no" >> /var/lib/gdm3/.config/pulse/client.conf
> echo "daemon-binary = /bin/true" >> /var/lib/gdm3/.config/pulse/client.conf
> su - Debian-gdm -c "mkdir -p /var/lib/gdm3/.config/systemd/user"
> su - Debian-gdm -c "ln -s /dev/null
> /var/lib/gdm3/.config/systemd/user/pulseaudio.socket"
> 

Ah -- that old chestnut! I had that problem back in the day (wheezy or 
stretch, I don't remember which) with a pair of high-end bluetooth 
headphones.

I recently fresh-installed Buster, I've used Bluetooth with a speaker 
but not with my headphones without problems, but I just checked and I 
_do_ have 2 instances of pulseaudio running, one as my regular user and 
one as Debian-gdm. Isn't that a bug in Debian's setup? Is there 
some reason one would want things that way?

Mark



Re: graphics card recommendation

2019-09-23 Thread Mark Fletcher
On Mon, Sep 23, 2019 at 08:09:12PM +0200, email.list...@gmail.com wrote:
> Hi!
> 
> I'm building a new machine and I'm looking at graphics cards right now. I
> want a card that can handle 4k for regular productivity tasks (if it can
> handle gaming at those resolutions doesn't matter), that is as silent as
> possible and has good support in debian. Right now I'm considering a Radeon
> RX 5700. Does anyone have any experience with this card or a recommendation
> for another card?
> 
> 
I've heard a lot of people cursing about Radeon graphics card support in 
Linux over the years, but my information may be out of date...

Mark



Re: Rescuing hard disks

2019-09-23 Thread Mark Fletcher
On Mon, Sep 23, 2019 at 03:07:15AM -, Debian Buster wrote:
> 
> Posible Options:
> 1. if you use lilo, look for a copy of the partition table.
> 2. create the partition exactly as it was.

I'm running GRUB not lilo -- used lilo back in the 90's but switched to 
grub whenever Debian started prescribing it (or more accurately when I 
noticed). That's probably around Woody or so -- I was away for a while.

Option 2 above nailed it -- using cfdisk to create a partition filling 
the whole disk caused cfdisk to immediately recognise the installed file 
system. I recognise that was only that easy to do because I knew that 
the original partition arrangement was so simple. If it had been more 
complex I would likely have been hosed.

Anyway, dodged a bullet there -- thanks a lot for your help!

Mark



Rescuing hard disks

2019-09-22 Thread Mark Fletcher
Hello

While setting up a newly purchased RAID-capable hard disk cage I've 
damaged the contents of 2 hard disks and want to know if it is possible 
to recover.

The cage has 5 disk slots each occupied by 3TB hard disks. 4 of the 
disks came from an older cage by the same maker (TerraMaster, in case it 
matters) and one is new.

In the old configuration I had 2 disks in a RAID 1 configuration and 2 
as single disks. I transferred over the 4 disks from the old cage and 
added a new disk in the 5th slot.

The new cage is RAID 1 capable in its first two slots and the remaining 
three are single disks.

You've probably guessed what I did by now. I put the two single disks 
from the old cage into the first two slots of the cage and enabled RAID. 
I should have put the two disks that were RAID in the old cage in those 
slots.

I realised almost immediately what I had done and swapped the disks 
around into the correct configuration. My originally-RAID pair are now 
correctly in the first two slots with RAID enabled and are none the 
worse for the experience of having briefly been in the single 
slots. Unfortunately my two originally-single disks are showing up as 
having no partitions according to lsblk.

There was data on those disks that I would ideally like to get back. Do 
I have any hope of undoing whatever damage was done to the disks when 
the cage was switched to RAID mode? I did not write any data to them, 
and crucially I did NOT create a new file system on the disks after 
turning on RAID in the cage before realising what I had done.

A search turned up the gpart program but it looks ancient -- could it 
still help me? gparted may also help but most online info about it is 
about repartitioning disks to prepare for a dual-boot install, not about 
recovering a messed-up partition table (which is what I assume I am 
dealing with here).

The disks were originally formatted ext4 with a single partition taking 
the whole of the disk. Since no file system was created on them and no 
data was written to them while they were in the RAID slots of the cage, 
I'm hoping I can repair things, but looking for ideas of where to start.

Thanks in advance and in hope

Mark

PS Running buster if that's important



New Services in Systemd on Debian

2019-09-20 Thread Mark Fletcher
Hi there

If one wants to create a new systemd service on Buster, for example for 
some home-grown unit, where would be the right place to put the .service 
file? Candidates are obviously /lib/systemd/system or 
/etc/systemd/system but in both cases that would mean dropping files in 
places that really ought to be left to package management (or at least I 
can see an argument for that). Is there a better / safer / 
less-likely-to-be-missed-in-a-backup place to put them?
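
For what it's worth, /etc/systemd/system is the documented home for 
administrator-written units (packages install theirs under 
/lib/systemd/system), so dropping a file there is the intended path rather 
than a hack. A minimal sketch with a made-up unit name, written to a scratch 
file here rather than the real directory:

```shell
# Hypothetical home-grown unit; the admin copy would live at
# /etc/systemd/system/myjob.service, followed by
#   systemctl daemon-reload && systemctl enable --now myjob.service
cat > myjob.service <<'EOF'
[Unit]
Description=Example home-grown job (hypothetical)

[Service]
Type=oneshot
ExecStart=/usr/local/bin/myjob

[Install]
WantedBy=multi-user.target
EOF
grep -c '^ExecStart=' myjob.service   # prints 1
```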

If not I'll probably create a package for the .service file and make it 
dependent on some systemd package as well as the package containing the 
thing I want to create the service for, but that seems a bit of a fiddle 
for one .service file...

Thanks

Mark



Re: usr merge apparently breaks amanda

2019-09-07 Thread Mark Fletcher
On Fri, Sep 06, 2019 at 03:58:45PM -0400, Gene Heskett wrote:

> The manifestation matches, but having read thru amanda's tools own logs, 
> on that machine, I am not so sure we've pointed the finger in the right 
> direction. From the emailed backup report, it looks as if its crashed 
> the instant amanda touches it. But the amanda tool's own logs don't show 
> any errors.  But the network is gone, stopping the backup.  So are all 
> my ssh -Y logins, but they look as if the keyboard isn't sending if 
> already logged in, but starting another login gets a no route to host 
> msg.
> 
> I think one thing is guaranteed, when I or someone else finds it, it will 
> be a forehead slapper.
> 
> > [1] https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=939411
> > [2] https://lists.debian.org/debian-user/2019/09/msg00219.html
> 
> 

Possibly a stupid question, but... what about other logs?

eg syslog or equivalent?

Mark



Re: What program is capturing key press on root window of X?

2019-07-18 Thread Mark Fletcher
On Fri, Jul 19, 2019 at 0:26 Brad Sawatzky  wrote:

> On Thu, 18 Jul 2019, Harry Putnam wrote:
>
> [ . . . ]
> > Somewhere in the last few months my Debian OS has acquired an input
> > box on upper right of base window (in X) that appears to grab any key
> > presses aimed at the base window and print them in that small input
> > window.


What you are describing sounds to me like a non-English-language character
composer. Eg Japanese uses one to compose kanji characters from components
typed one by one on the keyboard. Chinese uses one too, and so do many
other languages, especially languages that need double-byte characters.

Have you, or could something have, installed any foreign languages
recently?

Mark


Re: [OT] send all email from certain From: addresses into a spam

2019-07-04 Thread Mark Fletcher
On Wed, Jul 03, 2019 at 06:44:42PM +0300, Reco wrote:
>   Hi.
> 
> On Wed, Jul 03, 2019 at 11:39:22AM -0400, Greg Wooledge wrote:
> > 
> > procmail might have worked, but it's more of a pain to learn procmail
> > than it is to write my own filter.  I also get more flexibility this way.
> > 
> > The write-up of my approach is at
> > .
> 
> Maildrop does exactly this. For instance, 
> 
> ^From:.*User Name/
> 
> Transforms to this snippet of .mailfilter:
> 
> if ( /^From:.*User Name/:h )
>   to /dev/null
> 
> Works with exim, postfix and probably qmail OOB.
> 

Could this also be expanded to delete not only mail from a particular 
user, but also any mail *replying to* that mail as well? In other words, 
get rid of not only the mail that starts a thread, but the entire 
thread?

Or would that be a task better done in mutt?
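
One hedged, untested sketch extending the .mailfilter snippet quoted above: 
replies can be caught by matching the thread headers (In-Reply-To: and 
References:), but only when the offending sender's Message-IDs carry a 
recognisable token such as their domain, which is common but not guaranteed. 
example.com below is a placeholder:

```
# Drop mail from the user, and anything replying into their threads,
# assuming their Message-IDs contain @example.com.
if ( /^From:.*User Name/:h || /^(In-Reply-To|References):.*@example\.com/:h )
  to /dev/null
```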

Mark



RESOLVED: Trouble making bootable USB from ISO image

2019-05-04 Thread Mark Fletcher
On Sat, May 04, 2019 at 05:03:42AM +, Russell L. Harris wrote:
> On Sat, May 04, 2019 at 11:39:39AM +0900, Mark Fletcher wrote:
> 
> Nonetheless, I do find "Disks" handy to identity the device associated
> with a USB memory stick just plugged in, and to indicate at a glance
> the partitioning and formatting.

Indeed I used that at one point to see what state it thought the disk 
was in after I'd written the ISO image.

> 
> According to "https://www.debian.org/CD/faq/#write-usb;, all Debian
> i386 and amd64 images are created using the isohybrid technology, so
> that they may be copied to USB flash drives which boot directly from
> the BIOS or EFI firmware of most PCs.  In Linux, copy with "cp 
> " or with "dd if= of= bs=4M; sync".  And be sure
> you are copying to the device (such as "/dev/sdd") and not to a
> partition of the device (such as "/dev/sdd1").

Indeed, that was how I did it. The Windows download page clearly 
indicated that the ISO was for USB sticks or DVDs. Someone else also 
asked about the size of the image, being too big for a single-layer DVD. 
In answer to their question, yes, the download page did point that out, if 
one read as far as the troubleshooting section where it said something 
like "If your burner says there isn't enough space on the DVD, it means 
you need a dual-layer DVD" or similar. Since I was using a 16GB pen 
drive I wasn't worried about that element of things.

> 
> In the case of a USB flash drive which refuses to boot, you might try
> using "fdisk" to delete all existing partitions and create a new
> partition, followed by "mkfs.msdos" before you copy the ISO image to
> the drive.

I essentially did that at one point while troubleshooting the pen drive, 
except using Gnome's ability to "format" the device (now there's a blast 
from the past)

> 
> If everything else fails, before you toss the drive into the dumpster,
> plug the drive into a Window$ box and allow Window$ to format the
> drive.  Now and then a Window$ box can do something useful.
> 

Would only be an option if you have a windows machine to do it on. 
Someone did suggest installing kvm and launching the ISO as a VM, and 
then using that to burn the ISO onto a USB stick -- that was a creative 
solution I hadn't thought of. Using a VM for the entire experiment in 
the first place could almost have been a solution, actually -- except 
that I want my son to be able to access the machine when I'm not around.

But in the end it turned out the image was fine -- and the very first 
burn I did was probably also fine. Thank god we are past the days of 
DVDs or I would have had half a dozen shiny new coasters for no good 
reason.

The issue was Secure Boot. Specifically, in the case of my test machine, 
it's too old to support secure boot, and in the case of the final target 
machine, secure boot was turned off in the BIOS so that I could boot 
Buster which is what this machine was previously running. When I 
changed the CSM setting and also turned on Secure Boot (both settings needed to 
be changed) in the BIOS, the machine finally booted from the USB stick, 
and all proceeded smoothly from there.

So the issue in the end, contrary to my instincts, was nothing to do 
with errors or mistakes in writing the image to the pen drive (although 
I'm grateful for the memory refresh around the difference between 
unmounting and ejecting removable drives and the confirmation of the 
common-sense point that a drive needs to not be mounted when you write 
an image to it. In practice I think it actually worked, since the result 
looked healthy when I mounted it, but the advice is good nonetheless)
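
One cheap way to settle "did the write actually land intact" questions like 
this is to compare the image back against the device, reading only as many 
bytes as the image is long. A sketch practised on plain files, since no pen 
drive is assumed here; on real hardware device.img stands in for /dev/sdX:

```shell
# Make a fake 1 MB "ISO", "write" it, then compare the first image-length
# bytes of the target back against the source.
head -c 1M /dev/urandom > image.iso
cp image.iso device.img
cmp -n "$(stat -c%s image.iso)" image.iso device.img && echo "write verified"
```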

Thanks all for your help. And now as a result of this exercise I've had 
returned to me another machine which can be my new Buster toy :)

The other thing this has surfaced is all that noise in the logs when 
writing using a motherboard-builtin USB2 socket. When I tried the 
process in a USB3 socket provided by an add-in card, I didn't get all 
that noise in the log with system call quotes and complaints about 
systemd-udevd being blocked for too long. Interesting but I no longer 
think evidence of an actual problem, as I now believe all image writes 
worked correctly.

Mark



Re: Trouble making bootable USB from ISO image

2019-05-03 Thread Mark Fletcher
On Fri, May 03, 2019 at 12:54:17PM -0400, Greg Wooledge wrote:
> On Sat, May 04, 2019 at 01:50:31AM +0900, Mark Fletcher wrote:
> > it auto-mounted.
> 
> > So as root I did:
> > 
> > cp  /dev/sdf
> 
> You need the device NOT to be mounted when you do the cp.  This may mean
> you have to turn off your auto-mounter, or (better still) just log out of
> your Desktop Environment entirely, and log in as root on a text console
> for this operation.
> 
> If the device is mounted, that will interfere with the raw byte writing
> you're doing.  The results are unpredictable.
> 

This makes sense to me; confession time -- I originally ejected the pen 
drive when Gnome auto-mounted it, but then found I couldn't do anything 
with the device at all, so concluded that contrary to my memory it 
needed to be mounted. Web pages I'd found said nothing about unmounting 
the device before writing, so again I thought my memory was faulty. What 
I failed to consider was the difference between unmounting and ejecting.

So this time, I booted the machine, logged out of Gnome taking me back 
to the gdm2 login screen, and switched to a second virtual terminal, 
logging in there as root.

I then plugged in the pen drive. After a second or so, I saw a message 
in the console: 
sd 10:0:0:0 [sdf] No Caching mode page found
sd 10:0:0:0 [sdf] Assuming drive cache: write through

ls /dev/sd* indeed showed a sdf device that hadn't been there before, 
and df confirmed it had not auto-mounted (which makes sense since I 
wasn't logged into Gnome)

so next I did cp  /dev/sdf

The hard disk light flashed for I would say about 20 seconds, then went 
dark. Nothing apparently happened for about 2 minutes, then in the 
console there appeared:
systemd.udevd: blocked for more than 120 seconds

together with advice about a value in /proc/sys to set to 0 to suppress 
that warning. This appeared 2 more times; the process took in total 
about 6 minutes before the command line returned without further console 
messages. Looking in journalctl there were lots of references to what 
look like system calls, and at the end evidence of systemd killing and 
restarting systemd.udevd.

For good measure I executed a sync command, as the debian wiki advice 
for writing ISO images mentions it (admittedly in conjunction with dd) 
and I assumed the worst it would do was nothing. Then I removed the pen 
drive, logged out, and switched back to the virtual terminal with gdm2 
running.

Logging back in to Gnome, I inserted the pen drive and it promptly 
automounted and I am able to see what looks like a healthy file system 
containing a boot dir, efi boot stuff etc, plus the usual Autorun stuff 
a microsoft ISO usually contains. It _looks_ OK to my untutored eye. 
Gparted still does not like it though (saying Unallocated and saying 
there is no partition table).

And, as you've probably guessed by now, the machine still won't boot from it.

Beginning to suspect something wrong with the pen drive, I also tried 
reformatting the pen drive in Gnome with a FAT32 file system, and 
checked I could write text files onto it after doing so and that they 
persisted across unmount and remounts. All was well. Then I repeated the 
above image-copying process and got the same result.

I am very reluctantly being drawn towards the conclusion there is 
something wrong with the ISO image -- this sounds very unlikely to me as 
it was downloaded from Microsoft and while they are not exactly my 
favourite software maker in the world I would expect them not to publish 
a duff ISO, and then leave it sitting there for months without fixing it.

Thomas asked for the output of a particular xorriso command --  here it is:

root@kazuki:/home/mark# xorriso -indev 
~mark/Downloads/Win10_1809Oct_v2_Japanese_x64.iso -report_system_area plain 
-report_el_torito plain 
xorriso 1.4.6 : RockRidge filesystem manipulator, libburnia project.

xorriso : NOTE : Loading ISO image tree from LBA 0
xorriso : UPDATE : 1 nodes read in 1 seconds
libisofs: WARNING : Found hidden El-Torito image. Its size could not be figured 
out, so image modify or boot image patching may lead to bad results.
libisofs: WARNING : Found hidden El-Torito image. Its size could not be figured 
out, so image modify or boot image patching may lead to bad results.
xorriso : NOTE : Detected El-Torito boot information which currently is set to 
be discarded
Drive current: -indev '/home/mark/Downloads/Win10_1809Oct_v2_Japanese_x64.iso'
Media current: stdio file, overwriteable
Media status : is written , is appendable
Boot record  : El Torito
Media summary: 1 session, 2591375 data blocks, 5061m data,  631g free
Volume id: 'CCCOMA_X64FRE_JA-JP_DV9'
xorriso : NOTE : No System Area was loaded
El Torito catalog  : 22  1
El Torito images   :   N  Pltf  B   Emul  Ld_seg  Hdpt  Ldsiz LBA
El Torito boot img :   1  BIOS  y   none  0x  0x00  8 513
El Torito boot img :   2  

Trouble making bootable USB from ISO image

2019-05-03 Thread Mark Fletcher
Hello

I'm trying to use Stretch to write a .ISO image to a USB device. The 
image is the Windows 10 installer (please don't flame me! It's part of 
an education project for my son!) which I downloaded from Microsoft, and 
which they claim should be able to be written to a USB device. Microsoft 
would have me write the ISO using a tool of theirs, but since I don't 
have another Windows device that isn't possible. They say that in that 
case the ISO can be written to a pen drive using OS-specific tools.

I'm attempting to test the image before booting the installer on the 
final computer earmarked for sacrifice to this project. The computer I 
am testing on has successfully booted from a pen drive before and the 
pen drive I am using has been used to boot this computer before, albeit 
not this image.

After downloading the .iso file, I plugged in the USB stick. Because it 
had something recognisable on it already, it auto-mounted. It was 
assigned device /dev/sdf and mounted somewhere under /media, I don't 
remember the exact path.

So as root I did:

cp  /dev/sdf

and waited a while. Eventually the copy finished (the ISO is between 5 
and 6 GB, the capacity of the drive is 16GB). Then I did

eject /dev/sdf

and after a long wait, that command came back with no errors. I then 
removed the drive.

On plugging the drive back in, Stretch can recognise there is a 
filesystem on it and mount it. I can see the usual efi structure for 
booting etc.

BUT, the test computer refuses to recognise it as bootable. The BIOS can 
evidently see the device is there and is an option to boot from, but 
when I try it fails and falls back to the machine's internal hard disk. 
If I disable booting from the hard disk in the BIOS, it fails to boot at 
all with an error message essentially saying "give me something to boot 
from".

There is some discussion on the internet suggesting that the pen drive 
additionally needs to be marked as bootable. I thought that was an old 
pre-GPT partitioning thing, and I also would have thought that if it 
were relevant the .ISO image should contain the necessary settings, but 
hey I'll try anything once... gparted was suggested as the tool to mark 
the partition as bootable but when I fire up gparted it doesn't seem to 
recognise the pen drive as it says the 16GB pen drive is "14.7GB 
unallocated" and says there is no partition table on it. This despite 
the fact that it auto-mounts when I stick it in while Stretch is 
running...

I'm confused whether the problem is in something I did or didn't do 
while copying the ISO image, or if there is something I need to set in 
the BIOS to get it to boot (I have a dim memory of fannying around with 
various settings a very long time ago when I first got booting from USB 
to work, and the CMOS battery on this motherboard has died at least once 
since then). I'm extremely doubtful that the ISO image just doesn't 
work, and I know for a fact that this computer can be persuaded to boot 
from a pen drive and that this pen drive has been used successfully in 
the past.

Any suggestions of what I could do to diagnose the problem?

Thanks in advance

Mark



RESOLVED: Sudden “operation not permitted”

2019-04-17 Thread Mark Fletcher
On Wed, Apr 17, 2019 at 11:17:04AM +0300, Reco wrote:
>   Hi.
> 
> On Wed, Apr 17, 2019 at 03:25:39PM +0900, Mark Fletcher wrote:
> > I decided to try a reboot, which cleared the upowerd problem and returned
> > load to 0 or close to it. But now, network activity is not working.
> 
> Seems like a coincidence to me.

You were right -- see below

> 
> 
> > Any attempt to ping an IP address (eg my router) results in “Operation not
> > permitted” even when run as root.
> 
> This. About the only known (for me, at least) way to achieve this is to
> send back ICMP Type 3 (Destination Unreachable) Code 9 or 10
> (network/host administratively prohibited).
> It *could* be a SELinux or Apparmor misconfiguration, of course, but
> we'll deal with it later.
> 
> The main question is, who sends ICMP back to your host.
> 

No one -- as it turns out. The cause turned out to be that recent 
changes I had made to this machine to support making its MTA available 
to my VPN introduced a buggy iptables startup script which left my 
iptables settings in a stupid state on boot (blocking EVERYTHING). I'd 
never have thought of that if you hadn't asked me for the output of 
iptables-save. Soon as my eye landed on "iptables" I was like, 
"ohhh, sh*t".

Fixing the bug in the startup script and rebooting (to make sure it will 
work next time) -- all is now well. No hardware fault, I'm very pleased 
to report.

I don't know what caused upowerd to go nuts and probably never will, but 
right now I'm just happy my machine is back up and running properly.

Thanks Reco

Mark



Sudden “operation not permitted”

2019-04-17 Thread Mark Fletcher
(Apologies if this mail comes through poorly formatted for the list; my
main machine is unavailable due to this problem and I’m writing on an
iPad...)

Running Stretch on a circa-2009 self-built machine which has run happily
without serious issues since it was built, apart from the odd annoyance
with Bluetooth audio which the list has already had the pleasure of hearing
about.

This morning I unlocked it before leaving home, and noticed that load was
fairly constant at about 1.0 when it should have been at 0 as the machine
should have been idle. I listed processes with top and noticed that upowerd
was taking up a whole CPU to itself. Normally I wouldn’t notice this daemon
doing its thing.

Google turned up nothing relevant.

I decided to try a reboot, which cleared the upowerd problem and returned
load to 0 or close to it. But now, network activity is not working. Any
attempt to ping an IP address (eg my router) results in “Operation not
permitted” even when run as root. Attempt to access any web page results in
failing to find the site. Attempting to ping a text domain (eg
www.google.com) results in an error message (instantly) saying could not
resolve...

It seems like networking is bejiggered suddenly on this machine. I did not
install updates before rebooting, last time updates were installed was
Sunday, and all has been well since then until this morning, although I did
not reboot during that period until this morning. The machine is attached
to my network via an Ethernet cable running to a WiFi+wired router. That is
obviously working as the machine was able to get an IP address by DHCP
after the reboot (ip route after reboot showed IP address correctly
assigned) but unable to resolve any address and unable to ping an IP
address of the form 192.168.xx.yy with the “Operation not permitted” error.

All the pinging I’ve been trying worked without issues before this problem
occurred, both as root and as an unprivileged user.

Looking through the journalctl since my reboot, I do not see anything that
obviously points to the problem. Network Manager seems to start OK, as far
as I can tell. I don’t see any significant errors except postgreSQL failing
to start, which is normal and I don’t use it. The first sign of trouble (to
my eye, anyway) in the boot log is when services that want the network eg
ntp start trying to interact with it, and failing.

A second reboot produced exactly the same result. Other devices on my
network are working fine.

Putting the upowerd behavior together with the suddenness of this problem,
I’m very afraid that this isn’t really permissions and is in fact some sort
of hardware issue — the machine is 10 years old, was built by me, and has
been in continuous use since it was built... Any suggestions for what I can
do to diagnose?

Thanks in advance

Mark


Re: DVD Creation software

2019-04-12 Thread Mark Fletcher
On Fri, Apr 12, 2019 at 12:48:13PM +0100, Paul Sutton wrote:
> Hi
> 
> 
> I would like to take a set of video files (I have mp4  and am aware they
> need transcoding) and put these on a dvd along with a menu etc, so they
> can be played from VLC or as a normal dvd.
> 
> I did this years ago, it appears the application bombono no longer
> exists (or it is not in the Debian repositories)
> 

I haven't done this for a few years now, but in the early 20-teens, I 
did a _lot_ of this, and I used tovid which is still in the repos at 
least in Stretch.

It has a GUI which is a little buggy if I remember but works. For much 
of my tovid days I used the command line tools, and migrated to the GUI 
towards the end. You can do quite a lot with it -- all you mentioned and 
more. You can get quite sophisticated with the DVD menus and so on if 
you've got the patience.

Back in the day it was supported by a chap who went by the pseudonym 
"grepper", who was extremely helpful. I don't know if he's still 
supporting the tool or not. He was looking for someone to hand it over 
to back in the day, dunno if he ever found someone.

tovid did a way better job than anything else I tried of transcoding 
video for my most common usecase -- that of adjusting the TV format from 
PAL to NTSC. I needed to do this because my British relatives had a 
habit of buying DVDs for my kids in the UK, and we lived in Japan. Back 
in those days dual-TV-format DVD players were a rarity.

Check out tovid and if grepper is still supporting it, tell him I said 
hi!

Mark



Re: Error Message

2019-04-11 Thread Mark Fletcher
On Thu, Apr 11, 2019 at 07:03:08PM +0200, Michael Lee wrote:
> Hello, I would like to know what I am supposed to do about this error
> message. Would appreciate guidance.
> M Lee

> 
> 
> Nicht alle Paketquellenindizes konnten heruntergeladen werden
> 
> Die Software-Paketquelle steht möglicherweise nicht mehr zur Verfügung oder 
> ist aufgrund von Netzwerkproblemen nicht erreichbar. Sofern für diese 
> Software-Paketquelle noch eine ältere Paketliste verfügbar ist, wird diese 
> verwendet. Anderenfalls wird diese Software-Paketquelle gänzlich ignoriert. 
> Bitte prüfen Sie Ihre Netzwerkverbindung und vergewissern Sie sich ebenfalls, 
> dass die Adresse der Software-Paketquelle korrekt in den Einstellungen 
> eingetragen ist.
> 
> 
> The repository 'http://ftp.de.debian.org/debian stretch/updates Release'
> does not have a Release file. Updating from such a repository can't be
> done securely, and is therefore disabled by default. See apt-secure(8)
> manpage for repository creation and user configuration details.
> 
> http://ftp.de.debian.org/debian/dists/stretch-updates/InRelease: The
> key(s) in the keyring /etc/apt/trusted.gpg are ignored as the file is not
> readable by user '_apt' executing apt-key.
> 
> http://ftp.de.debian.org/debian/dists/stretch/Release.gpg: The key(s) in
> the keyring /etc/apt/trusted.gpg are ignored as the file is not readable
> by user '_apt' executing apt-key.
> 
> 

Looking at the error message (the English part, I can't read the German) 
I suspect the issue is the second part of the message, and the first 
part complaining that ftp.de.debian.org doesn't have a Release file is 
a red herring. It looks like ownership or permissions on your 
/etc/apt/trusted.gpg file are dodgy. Have a look at ownership and 
permissions on that file and, if it's not obvious to you what's wrong, 
post here the output of ls -l /etc/apt/trusted.gpg and hopefully it will 
be obvious to someone on here.
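[Editor's note: for reference, the expected state on a stock system, and a sketch of the fix, would be roughly the following; the size field is elided.]

```shell
# On a stock Debian system the keyring is root-owned and world-readable:
ls -l /etc/apt/trusted.gpg
# -rw-r--r-- 1 root root ... /etc/apt/trusted.gpg
# If the owner or mode differs, this restores the expected state so the
# unprivileged '_apt' user can read it:
sudo chown root:root /etc/apt/trusted.gpg
sudo chmod 644 /etc/apt/trusted.gpg
```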

Have you been making changes to your apt configuration recently? If you 
have done anything in that area recently, describe that too as that may 
have a bearing on the problem.

HTH

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-11 Thread Mark Fletcher
On Thu, Apr 11, 2019 at 11:51:44AM -0400, Greg Wooledge wrote:
> On Fri, Apr 12, 2019 at 12:42:12AM +0900, Mark Fletcher wrote:
> > > Why not use a dynamic DNS provider?
> 
> > The problem is how I know that the IP 
> > address has changed and hence the DNS mapping needs updating.
> 
> By doing it correctly.
> 
> <https://mywiki.wooledge.org/IpAddress> has an example for Debian.
> Specifically Debian 8 (jessie), but it should still work in newer
> releases.
> 
That page seems to be all about detecting the IP address; thanks, but 
that part was solved before I even opened this thread. Perhaps I phrased 
poorly earlier in the thread. The issue isn't how I tell what my IP 
address is, the issue is how the machine communicates the fact that it 
has changed to me even when I'm not logged into it. And this thread 
solved that.

You did say one very interesting thing that brought me up short on that 
page though; "I need to get my IP address so I can communicate it to my 
dynamic DNS provider". I guess I need to look into that 
-- if that could be automated, then yes a dynamic DNS provider would be 
another, more automated, way to solve my underlying problem (to which I 
now have a solution I'm happy with)

Thanks

Mark



RESOLVED: Simple Linux to Linux(Debian) email

2019-04-11 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 02:14:33PM +0100, Thomas Pircher wrote:
> Mark Fletcher wrote:
> > mutt won't let me go back and edit the subject line.
> 
> Hi Mark,
> 
> > Short version: Is it reasonable to expect a piece of software to exist
> > that establishes a direct connection to a "remote" MTA and delivers mail
> > there for delivery, without also offering up mail reception
> > capabilities?
> 
> Yes, have a look at the dma or nullmailer packages.  There used to be
> more of these programs in Debian (ssmtp, for example), but on my system
> (Buster) only those two seem to have survived.
> 
> You could also use one of the big MTAs and configure them to listen to
> local connections only, and/or block the SMTP ports with a firewall, but
> both dma and nullmailer do their job just fine. Besides, they are much
> simpler to configure.
> 

So this issue is now resolved; in the end I went with the sSMTP package, 
which pretty much seems precisely designed for situations like mine. I'm 
slightly alarmed by its orphan status in Debian, preventing it from 
getting into testing -- if I had more time on my hands I'd sign up to 
maintain it. But it was extremely simple to build and worked perfectly 
on my LFS machine once built.
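[Editor's note: for anyone following along, a minimal ssmtp.conf for this kind of one-way relay might look like the fragment below; every value is an illustrative placeholder, not taken from the poster's setup.]

```
# /etc/ssmtp/ssmtp.conf -- illustrative placeholder values
root=postmaster
mailhub=10.8.0.1:25        # the relay's VPN address and SMTP port
hostname=lfsbox.example.net
FromLineOverride=YES
```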

The only thing I needed to do was add a PREROUTING rule on my Stretch 
machine's iptables configuration because my Stretch machine's exim4 is 
not listening on the VPN interface, and I didn't want to change its 
configuration to make it do so because the VPN isn't always up and I 
don't want exim4 failing to start because the VPN hasn't been started 
when it starts during a reboot. So instead I am re-routing traffic 
coming into the Stretch machine via the VPN on the SMTP port to the 
machine's local physical IP address, where exim4 is listening. By NOT 
mucking around in POSTROUTING with the source address of the packets, 
the source remains the VPN IP address of the client machine, and thus 
replies from exim4 are correctly routed back through the VPN to the 
client. Perfect.
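[Editor's note: expressed as firewall rules, the redirection described above might look like this sketch; tun0 and 192.168.1.10 are assumed names for the VPN interface and the machine's LAN address, not taken from the original setup.]

```
# Redirect SMTP arriving over the VPN to the LAN address exim4 listens on:
iptables -t nat -A PREROUTING -i tun0 -p tcp --dport 25 \
         -j DNAT --to-destination 192.168.1.10:25
# Deliberately no SNAT/MASQUERADE in POSTROUTING: the source stays the
# client's VPN address, so replies route back through the tunnel.
```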

I made confusion for myself by initially trying to set the target 
address to 127.0.0.1 instead of my local physical IP; that didn't work, 
I suspect because the packets then become invalid because they are 
claiming to be local packets but have a source address that is 
off-machine. I contemplated for a few minutes what I'd have to do to 
work around that, and concluded that simply mapping to the physical 
private IP of the machine was cleaner, and allows for different machines 
on the VPN to leverage my Stretch box as a mail relay in the future 
should I have the need to do that (I don't today).

It works perfectly -- and more to the point of this thread, sSMTP was 
extremely simple to compile, is extremely simple to use, and does the 
job perfectly.

Thanks to all who made suggestions. I did also download dma and will 
play around with that for learning's sake, but for now I'm going with 
sSMTP as a solution to this particular problem.

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-11 Thread Mark Fletcher
On Tue, Apr 09, 2019 at 02:34:30AM +0200, Jan Claeys wrote:
> On Mon, 2019-04-08 at 21:33 +0900, Mark Fletcher wrote:
> > I've created a very simple script that is capable of parsing the
> > output of "ip addr" and comparing the returned ip address for the
> > relevant interface to a stored ip address, and thus being able to
> > tell if the IP address has changed. What I'd like to do now is make a
> > means for the LFS box to be able to notify me of the fact that the
> > external-facing IP address has changed. 
> 
> Why not use a dynamic DNS provider?
> 
> 
Primarily because it wouldn't solve my problem. IIUC it would allow me 
to map a domain name to the IP address assigned to my home internet 
connection. That means that when the IP address assigned to my internet 
connection changes, I can simply alter the mapping and my VPN clients 
start working again, without any configuration change on them. Great, 
but that isn't the problem. The problem is how I know that the IP 
address has changed and hence the DNS mapping needs updating. I don't 
see any way that a dynamic DNS service is going to know when my ISP 
arbitrarily re-assigns my IP. I need the machine that is being assigned 
the IP address to be able to tell me when it changes, and that is what 
this thread was about (specifically the "how it would be able to tell 
me" -- I'd figured out the "how it would know" for myself).

But thanks for trying to help anyway!

Mark



Re: Tracking the next Stable release

2019-04-09 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 05:42:46PM -0300, Francisco M Neto wrote:
> Greetings!
> 
> 
> https://twitter.com/debian_tracker
> 
Nice! What does the level of release-critical bugs need to fall to 
before a release can happen -- it's not zero is it?

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 02:39:35PM +0100, Joe wrote:
> On Mon, 8 Apr 2019 21:33:03 +0900
> Mark Fletcher  wrote:
> 
> 
> > 
> > My image of an ideal solution is a piece of software that can present 
> > email to a remote MTA (ie an MTA not on the local machine) for
> > delivery, but is not itself an MTA, and certainly has no capability
> > to listen for incoming mail.
> > 
> 
> a) Sendmail. Not the full-featured MTA, but the utility.
> https://clients.javapipe.com/knowledgebase/132/How-to-Test-Sendmail-From-Command-Line-on-Linux.html
> 

Oh ah. Right, I hadn't separated the two in my mind. This may also do 
the job well I'm guessing.

> b) Write it yourself. If you can do simple scripting then you can write
> something that talks basic SMTP to a remote SMTP server.
> 
> Here's basic unencrypted SMTP:
> https://my.esecuredata.com/index.php?/knowledgebase/article/112/test-your-smtp-mail-server-via-telnet
> 



Yes, I had considered that too, and was going to script something up 
over a telnet session (inside my home LAN, albeit through a VPN to be 
able to tunnel back through a NAT'ing router) if this thread didn't turn 
up anything useful. But it did. :)
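[Editor's note: for reference, the raw unencrypted SMTP dialogue such a script would have to drive looks roughly like this; host names and addresses are placeholders, client input is unprefixed, and server replies carry their numeric codes.]

```
220 mail.example.net ESMTP
HELO lfsbox.example.net
250 mail.example.net
MAIL FROM:<alert@lfsbox.example.net>
250 2.1.0 Ok
RCPT TO:<me@example.net>
250 2.1.5 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Subject: IP address changed

New address: 203.0.113.7
.
250 2.0.0 Ok: queued
QUIT
221 2.0.0 Bye
```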

Also, I'm an engineer by training and follow the principle of re-use -- 
if there's a tool out there that does what I want I'd rather use it than 
write a new one. I admit I sometimes stray from that in the name of 
learning, but on this occasion I just want to solve a problem and move 
on.

> 
> c) Use a standard MTA and tell it not to listen to anything from
> outside your network. Use your firewall to not accept SMTP on the WAN
> port, and unless you have previously received email directly then the
> SMTP port shouldn't be open anyway. 
> 
> Use the MTA's configuration to listen only to localhost. Restart it and
> check where it's listening with netstat -tpan as root. 
> 
> That way you have two mechanisms to prevent access, even if you
> misconfigure one of them you should still be OK. After you have the MTA
> running and sending email where you want it to go, use ShieldsUp!! on
> https://grc.com to check which ports are open to the outside. Select
> 'All Service Ports' to check TCP/1-1055.
> 

Yes, agreed, this should also work. One thing I didn't mention in my 
original post is that I have to build all software for the "client" 
machine from scratch, and I'd expect a full-strength MTA to be a large 
project to build from source (many and potentially complex dependencies 
and so on), while a simple tool is likely to have a smaller and less 
complex dependency tree. Also because security is important on this box, 
every package I add needs careful consideration to make sure it doesn't 
compromise that -- again nudging me towards the smaller, simpler tool 
with fewer dependencies.

Thanks for your suggestions.

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 02:14:33PM +0100, Thomas Pircher wrote:
> Mark Fletcher wrote:
> > mutt won't let me go back and edit the subject line.
> 
> Hi Mark,
> 
> Yes, have a look at the dma or nullmailer packages.  There used to be
> more of these programs in Debian (ssmtp, for example), but on my system
> (Buster) only those two seem to have survived.
> 

Thanks, of those dma looks like a perfect match and nullmailer also 
would work.

> You could also use one of the big MTAs and configure them to listen to
> local connections only, and/or block the SMTP ports with a firewall, but
> both dma and nullmailer do their job just fine. Besides, they are much
> simpler to configure.
> 

Yes, I could -- but I'd feel safer in the presence of my own capacity 
for stupid mistakes using a piece of software that just can't listen for 
mail, in this particular scenario. So dma or nullmailer both fit the 
bill. I will pore over their docs as well as sSMTP's and see what comes 
out the best.

Thanks a lot for your help

Mark



Re: Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
On Mon, Apr 08, 2019 at 07:54:30AM -0500, Ryan Nowakowski wrote:
> You might check out sSMTP[1]
> 
> [1] https://wiki.debian.org/sSMTP
> 
Thanks, looks like sSMTP will do the job. As was pointed out elsewhere 
in the thread, it seems to have been dropped from Buster, but that is no 
barrier for me as I can build it myself on the LFS machine.

Thanks a lot

Mark



Simple Linux to Linux(Debian) email

2019-04-08 Thread Mark Fletcher
Hello all

As I wrote this I began to consider this is slightly OT for this list; 
my apologies for not putting OT in the subject line but mutt won't let 
me go back and edit the subject line.

Short version: Is it reasonable to expect a piece of software to exist 
that establishes a direct connection to a "remote" MTA and delivers mail 
there for delivery, without also offering up mail reception 
capabilities? If it is, what would that software be? Or alternatively, 
is there a failsafe way to configure one of the MTAs (I have no strong 
allegiance to any MTA, although the only one I have experience with is 
exim4) such that even if I miss a configuration step it won't be 
contactable from outside? To be clear, I only wish to be able to send 
mail in one direction in this scenario...

The more detailed background:

My ISP has recently developed the unfortunate habit of changing my IP 
address moderately frequently. They're allowed -- I'm cheap so I haven't 
paid for a fixed IP. I'm shortly going to be moving so now really isn't 
a good time to reconsider that position.

They still aren't changing it crazily frequently, but I now run an 
OpenVPN server at home and it is a bit inconvenient when they change my 
home IP and I don't notice before going on a business trip or something.

I'd like to set up an alert that lets me know when my external IP 
address has changed.

The box that is in a position to notice that the IP address has changed 
is on the outer edge of my network connected directly to the internet. 
It runs LFS.

Deeper inside my network, accessible from the LFS box via the VPN, is a 
Debian Stretch machine where I do most of my work.

I've created a very simple script that is capable of parsing the output 
of "ip addr" and comparing the returned ip address for the relevant 
interface to a stored ip address, and thus being able to tell if the IP 
address has changed. What I'd like to do now is make a means for the LFS 
box to be able to notify me of the fact that the external-facing IP 
address has changed. 
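[Editor's note: a minimal sketch of such a change-detection script is below; the interface name, state-file path, and recipient address are assumptions, not from the original.]

```shell
#!/bin/sh
# Poll the interface's IPv4 address and alert when it differs from the
# last one recorded.  IFACE, STATE, and the recipient are placeholders.
IFACE=eth0
STATE=/var/tmp/last_ip

current=$(ip -4 addr show "$IFACE" | sed -n 's/.*inet \([0-9.]*\)\/.*/\1/p')
previous=$(cat "$STATE" 2>/dev/null || true)

if [ -n "$current" ] && [ "$current" != "$previous" ]; then
    printf '%s\n' "$current" > "$STATE"
    echo "External IP changed: ${previous:-unknown} -> $current" \
        | mail -s "IP address change on $(hostname)" admin@example.net
fi
```

Run from cron every few minutes, this gives exactly the "tell me when it changes" behaviour discussed in the thread.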

My Debian machine runs exim4 and has a reasonably basic setup that 
allows it to accept mails from other machines on the network (although I 
may need to fiddle around with getting mail to come through the VPN) and 
deliver it either locally or using a friendly mail provider as a 
smarthost. I've successfully sent and received mail between this machine 
and a Buster machine on the same network, those two machines can see 
each other without the VPN. The Buster machine was also running exim4.

The LFS machine is, by design, very sparsely configured with only 
software I truly needed installed. I am willing to add software but wish 
to minimise the risk of installing something that opens up 
external-facing vulnerabilities as much as possible. What I'd really 
like is a piece of software that can reach out to my Stretch machine 
through the VPN to present an email for delivery without offering a 
local MTA that, improperly configured, might end up listening to the 
outside world and thus present a security risk.

I've looked at sendmail, postfix and of course exim4, and these are MTAs 
which could certainly do the job but which also present the risk of 
listening to the internet, especially if I do something stupid in the 
configuration which is entirely feasible. And from some basic tests I 
did on my Stretch machine I think the mail command expects there to be a 
local MTA for it to talk to...

My image of an ideal solution is a piece of software that can present 
email to a remote MTA (ie an MTA not on the local machine) for delivery, 
but is not itself an MTA, and certainly has no capability to listen for 
incoming mail.

Thanks in advance

Mark



Re: Access Debian 9 laptop by another device via wifi

2019-04-01 Thread Mark Fletcher
On Sat, Mar 23, 2019 at 08:31:34AM -0500, Tom Browder wrote:
> On Sat, Mar 23, 2019 at 5:12 AM Tom Browder  wrote:
> >
> > > > Is there any reliable way to either (1) always connect via the LAN or 
> > > > (2)
> > > > make the laptop broadcast its own LAN so I can login to it wirelessly 
> > > > from
> > > > the iPad?
> 
> Solved!!
> 
> I tried using my iPhone as a personal hotspot and connected the
> laptop AND iPad to it and I can ssh into the laptop with no problems.
> 
> -Tom
> 

I'm pleasantly surprised to hear that worked. I wouldn't have said it 
was a given that an iPhone personal hotspot would do routing between 
multiple WiFi devices connected to it. Obviously between the WiFi and 
the phone, but...

Of course the downside of this approach is the iPhone itself switches 
away from WiFi when you do this, and any data usage *it* does while you 
are working goes over the phone, potentially costing you money... I once 
ran up a >$2000 phone bill while roaming in HK because I didn't realise 
an online broker's app was still running on the phone, streaming 
prices...

Mark



Re: Bluetooth audio problem

2019-04-01 Thread Mark Fletcher
On Sat, Mar 23, 2019 at 08:24:30AM -0500, Nicholas Geovanis wrote:
> On Fri, Mar 22, 2019 at 9:29 AM Mark Fletcher  wrote:
> 
> >
> > So this turned out to be a weirdie -- if I dropped the "sudo" my
> > original command worked.
> > So now, suddenly from that update that started this thread, if I run the
> > pactl command as an unprivileged user, it works fine.
> 
> 
> Is it possible that you had previously started pulseaudio as root, and
> could no longer communicate with it as an unprivileged user?
> I ask this having been a pulseaudio victim myself sometimes.
> 
> 

Hmm, interesting idea, but the situation I was previously in pertained 
over a period since Stretch became Stable until shortly before my 
original mail in this thread (sometime in February if I recall 
correctly). Over, naturally, multiple reboots.

For that period, I had to use sudo when issuing the pactl command (in 
Jessie and previously, the pactl command wasn't necessary at all).

So I guess I could have had some sort of configuration which repeatedly 
put me in that situation on every reboot, and the update that "created 
the problem" actually fixed whatever *that* problem was... otherwise, no 
I don't think so.

Thanks for the suggestion though

Mark



Re: Bluetooth audio problem

2019-03-23 Thread Mark Fletcher
On Fri, Mar 22, 2019 at 08:44:46PM +0100, deloptes wrote:
> Mark Fletcher wrote:
> 
> > So this turned out to be a weirdie -- if I dropped the "sudo" my
> > original command worked.
> > 
> > So now, suddenly from that update that started this thread, if I run the
> > pactl command as an unprivileged user, it works fine. I have no idea why
> > it changed but I'm just happy I have it working again.
> 
> you can mark also as solved, if solved
> 

True, I could have. But I don't think it will kill interested people who 
follow after to read a 3-mail thread to see the resolution.



Re: Bluetooth audio problem

2019-03-22 Thread Mark Fletcher
On Sun, Mar 03, 2019 at 06:04:05PM +0100, deloptes wrote:
> Mark Fletcher wrote:
> 
> > Hello
> > 
> > Since upgrading to Stretch shortly after it became stable, I have had to
> > execute the following after a reboot before being able to connect to
> > bluetooth devices using the Gnome bluetooth applet:
> > 
> > $ sudo pactl load-module module-bluetooth-discover
> > 



> > Now, when I run the above command it is erroring out with:
> > 
> > xcb_connection_has_error() returned true
> > Connection failure: Connection refused
> > pa_context_connect() failed: Connection refused
> > 
> 
> 
> When I want to debug pulse I do
> 
> echo "autospawn = no" > ~/.pulse/client.conf
> 
> kill PA and run it from command line with -v option you can also
> use --log-level (man pulseaudio)
> 
> perhaps you can see what is the problem there. If not it might be dbus issue
> with permissions - check the dbus settings
> 
> Also some times it helps to remove the ~/.pulse directory and restart
> pulseaudio.
> 

So this turned out to be a weirdie -- if I dropped the "sudo" my 
original command worked.

So now, suddenly from that update that started this thread, if I run the 
pactl command as an unprivileged user, it works fine. I have no idea why 
it changed but I'm just happy I have it working again.

Mark



Bluetooth audio problem

2019-03-03 Thread Mark Fletcher
Hello

Since upgrading to Stretch shortly after it became stable, I have had to 
execute the following after a reboot before being able to connect to 
bluetooth devices using the Gnome bluetooth applet:

$ sudo pactl load-module module-bluetooth-discover

Without that command, needed once only after each reboot, the Gnome 
applet is unable to connect to any bluetooth audio devices, eg my 
headphones to be used as an audio sink, or my iPhone to be used as an 
audio source. Once that command has been issued once, everything works 
as it should, and continues to do so until the next reboot.
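[Editor's note: one common way to avoid reissuing such a command after every reboot is to have PulseAudio load the module at startup; a sketch follows. Stock Debian ships a guarded version of this already, so it may not apply directly to this poster's situation.]

```
# Append to ~/.config/pulse/default.pa (or /etc/pulse/default.pa):
.ifexists module-bluetooth-discover.so
load-module module-bluetooth-discover
.endif
```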

I've been away for a couple of weeks and so hadn't installed updates to 
my stretch installation for something like 3 weeks, until Saturday this 
week when I installed updates. Unfortunately I didn't pay enough 
attention to exactly what was upgraded but I _believe_ I saw udev in the 
list of things getting upgraded.

Now, when I run the above command it is erroring out with:

xcb_connection_has_error() returned true
Connection failure: Connection refused
pa_context_connect() failed: Connection refused

Googling for this has only turned up old information which does not seem 
to relate to the problem I am facing. In most cases the context is audio 
not working; in my case audio output through speakers plugged into the 
sound card is working fine, USB mic connected by a wire is working 
fine, the only problem is anything bluetooth.

Bluetooth on this machine is provided by a USB bluetooth dongle which I 
have been using for ages.

Can anyone suggest steps to diagnose?

TIA

Mark



Re: Session Recording

2018-12-27 Thread Mark Fletcher
On Fri, Dec 28, 2018 at 0:46 Ilyass Kaouam  wrote:

> Hi,
>
> Please, do you know of any opensource tool that can record a session?
> Can Freeipa do this?
>
> Thank's
>
> Depends what you mean by session.

For textual record of a series of commands and their output, as might be
useful over ssh, look into the “script” command.

For recording a graphical desktop including audio voiceover, I find
“simplescreenrecorder” to be very good.

Both are packaged for stretch.
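[Editor's note: as a quick illustration of the first suggestion, script(1) captures a command's terminal output to a file.]

```shell
# Record a single command's session into session.log (-q suppresses the
# "Script started/done" banners); plain `script` records an interactive
# shell until you exit.
script -q -c 'echo hello from the session' session.log
grep 'hello from the session' session.log
```

The resulting file contains the raw terminal output, control characters included, so view it with cat or less -R rather than an editor.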

Mark


Fwd: You removed Weboob package over pollitical reasons?Whole Internet laughs at you

2018-12-24 Thread Mark Fletcher
On Tue, Dec 25, 2018 at 7:56 Miles Fidelman 
wrote:

> Not for nothing...


Please don’t top post.

but I'd never heard of weboob before.  Looks like a
> rather powerful set of functions.  All the controversy has probably
> provided some much needed visibility.
>
> Personally, I don't care about the packaging - I tend to find that
> packagers tend to just muck things up.  For anything except the most
> common stuff, I'll always stick with make; make install
>

In that case, why use Debian? The packaging (and the policies to support
and govern it) are what makes Debian, Debian. Might as well use LFS if
you’re going to make ; make install everything anyway.

(Not that make ; make install is in any way evil; it’s great when it’s
needed, it’s just not needed very much by users in Debian)

Happy Holidays to all.

Mark

With apologies to Miles for previously accidentally replying to him
directly instead of replying only on-list...


Fwd: Recommendation for Virtual Machine and Instructions to set it up?

2018-12-06 Thread Mark Fletcher
Darn it, forgot to monkey with the headers when replying from gmail...
please see intended list reply below.

-- Forwarded message -
From: Mark Fletcher 
Date: Fri, Dec 7, 2018 at 8:19
Subject: Re: Recommendation for Virtual Machine and Instructions to set it
up?
To: 




On Fri, Dec 7, 2018 at 6:03 deloptes  wrote:

> rhkra...@gmail.com wrote:
>
> > What would you recommend for the software to run a VM under Jessie (that
> > would probably run Ubuntu), and can you recommend a fairly simple set of
> > instructions to first set up the VM, and then at least begin the install
> > process to that VM.
>
> Recently I am using headless or sometimes visual virtualbox. If you want it
> headless virtualbox is better. There are packages to download. I don't know
> if and which work on jessie.
> I do not think you need a backup if you install the packages.



I second this. I’ve been using virtualbox since around etch or so I think —
anyway a while. Since you’re on Jessie it should be in the repos. From
stretch on you need to add a repo to get it, as it fell out of the Debian
repo. But it is still in Jessie — at least it was when Jessie was stable.

I have always found virtualbox surprisingly easy to set up and use — a lot
of things that as a noob I expected to be hard just weren’t. There’s a good
visual setup screen for creating new VMs and the documentation is quite
good as I recall.

The only thing I’ve never got working properly is 3D acceleration.

HTH

Mark


Re: issues with stretch, issue 2 from many

2018-11-30 Thread Mark Fletcher
On Sat, Dec 1, 2018 at 0:59 Greg Wooledge  wrote:

>
> Now, please answer the following questions:
>
> 1) What version of Debian are you running?
>
> 2) How do you log in to your computer?  If it's by a display manager
>(graphical login), which one is it?
>
> 3) How do you start the X window system?
>
> 4) How have you configured whichever dot files are relevant to #2 and #3?
>
> 5) What is the actual problem you are having?
>
>
> #2 IS CRITICALLY IMPORTANT and I have never yet seen you answer it.
> Maybe I missed it somewhere in this incredibly drawn-out and unfocused
> thread, but I don't think so.
>
He said he uses xdm.


Re: selinux and debian squeeze 9.5

2018-11-03 Thread Mark Fletcher
> squeeze! You could be very lucky and someone with the same outdated,
> no longer supported distribution and experiencing the same problem
> comes along. I wouldn't count on it though.
>
> > Any suggestions?
>
> The obvious.
>

Speaking of obvious — the OP says 9.5, so presumably they _meant_ to say
Stretch — no?

Mark


Re: Debian installation

2018-09-02 Thread Mark Fletcher
On Sun, Sep 02, 2018 at 09:34:40PM -0700, Harold Hartley wrote:
> I had ordered myself a Debian dvd 9.5 and everything was installing
> great until it came to scan the mirrors. It seems I was not able to scan
> a mirror so that I would be able to apt an app, but I tried many mirrors
> and nothing. Can someone on here help me with a solution to get it to
> scan a mirror successfully. I ordered the dvd from a link from Debian
> website and nothing is wrong with the dvd.
> --

Hello Harold, and welcome to Debian!

If there are _no_ mirrors available, that sounds to me more like a 
problem with your local network setup, most likely your network device 
in your computer is not recognised properly. By any chance is it WiFi? 
Some WiFi devices especially in laptops need proprietary firmware and 
don't work out of the box off the first DVD. There is a 
"firmware-included" installer that might have a better chance of working 
in that case. I wouldn't _generally_ expect that to be an issue if you 
are connected by a wired ethernet connection, but wouldn't totally rule 
it out.

If you can give us more information about your hardware and how you 
access the internet, the community should be able to help more.

Mark



Re: Confused by Amanda

2018-09-02 Thread Mark Fletcher
On Sun, Sep 02, 2018 at 11:46:44AM -0400, Gene Heskett wrote:
> On Sunday 02 September 2018 06:27:01 Dan Ritter wrote:
> 
> > Amanda is not good for the situation you describe.
> 
> No its not ideal in some cases,, which is why I wrote a wrapper script 
> for the make a backup portions of amanda. With the resources and configs 
> that existed at the time that backup was made actually appended to the 
> end of the vtape, pretty much an empty drive recovery is possible. It 
> appends the /usr/local/etc/amana/$config, 
> and /usr/local/var/amanda/record_of_backups to each vtape it makes. So 
> those 2 files can be recovered with tar and gzip, put back on a freshly 
> installed linux of your favorite flavor, and a restore made that will be 
> a duplicate of what your had last night when backup.sh was ran.
> 

Thanks Gene, I was hoping you would pipe up but didn't want to throw the 
spotlight on you if you weren't inclined to. This is exactly what I'm 
after so I will definitely check it out.

Thanks also to Dan and Jose, I can see what you mean and it makes much 
of the Amanda documentation make more sense now. But as I mentioned, my 
configuration currently isn't an end state and I'm planning to expand it 
to cover other machines on my network, at which point Amanda will make 
more sense. I get the concept of two Amandas, one to backup the Amanda 
server of the first, but then you're into a "turtles all the way down" 
scenario, aren't you? Just seems overkill when one Amanda can look after 
its own server as well, albeit with some jiggerypokery which Gene has 
kindly cast light on.

So I think we can agree, Amanda's expected usage model is ideally for 
situations where there are multiple machines to back up, you designate 
one machine the Amanda server (presumably the one with the easiest / 
fastest access to the backup media) and accept that that machine needs 
special, usually separate, arrangements for _its_ backup. But it's 
_possible_ with attention to the right details such as things Gene has 
pointed out, to include the Amanda server machine itself in the backup.

Thanks all, especially Gene for, I suspect, saving me a lot of work.

Mark



Confused by Amanda

2018-09-02 Thread Mark Fletcher
Hello

I use Amanda for daily backups on Stretch. I found it not too difficult 
to set up once I got my head around its virtual tape concept.

Recently, prompted by not very much, I have started to question whether 
having these backups really put me in a position to restore the machine 
if I need to.

I recently messed up some files and decided to resort to the backup to 
recover them. I was able to do so, but the process left me wondering if 
I would really be in a position to do so in all cases. For example, 
Amanda configuration is in /etc/amanda -- what if /etc was what I needed 
to restore? Similarly, I gather there are files under /var/lib/amanda -- 
what happens if /var is damaged?

I have not been able to understand from the Amanda documentation really 
all that I need to have in place to be able to expect to recover from, 
say, a disk replacement after catastrophic failure. I'm imagining, main 
disk goes to data heaven, I buy a new one, install Stretch again fresh, 
and now I want to re-install packages and restore their backed-up 
configuration as well as restore my data in /home etc. I know there are 
a few experienced users of Amanda on this list -- can anyone help me, or 
perhaps point me to a good resource that explains it, or even if there's 
a section in the documentation I've missed that makes it clear?

I guess a key point is, in my configuration, the same machine is both 
Amanda server and Amanda client. I guess I may expand this in the future 
to have this machine manage backups for other machines, but at the 
moment that is not happening. Of course, the disk that houses the Amanda 
virtual tapes is off-machine. 

What I'm looking for is something along the lines of "your nightly backup 
routine needs to be: run amdump, then rsync this, this and this directory 
somewhere safe" or whatever it is. Or alternatively "don't be an idiot, 
you don't need to do any of that, amanda is magic in this, this and this 
way".

Thanks

Mark



pactl and bluetooth

2018-08-26 Thread Mark Fletcher
Hello the list

I'm running stretch amd64, upgraded from at least jessie and I think 
wheezy -- memory's a bit hazy now. I use Gnome on this machine.

Every time I reboot I find I can't connect my bluetooth headphones to 
the computer. In the Gnome bluetooth applet, when I click the slide 
button to connect, it immediately slides from ON to OFF without seeming 
to do anything.

A while back the archives of this list helped me discover that in order 
to fix this I need to run as root:

pactl load-module module-bluetooth-discover

Doing this immediately makes it possible to connect the headphones.

Can anyone point me at documentation of how I could arrange things so 
this happens automatically and I don't have to type it by hand? I know 
there's a way but I am struggling to find, or remember, how.
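I'm guessing the answer is something along the lines of adding the module 
to PulseAudio's startup configuration -- a sketch of what I imagine (file 
path and syntax from memory, untested):

```
# Appended to /etc/pulse/default.pa (or a per-user copy in
# ~/.config/pulse/default.pa) so it loads when the daemon starts:
load-module module-bluetooth-discover
```

but I'd like to confirm against real documentation before touching anything.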

Thanks

Mark



Re: yabasic problem

2018-08-20 Thread Mark Fletcher
Isn’t the problem that you misspelled “experimental” in your original file
paths?

Mark
On Mon, Aug 20, 2018 at 21:13 Richard Owlett  wrote:

> On 08/20/2018 02:35 AM, Thomas Schmitt wrote:
> > David Wright wrote:
> >  [snip]
> >> Would you agree, though, that "BASIC" is the language that must
> >> have the biggest contrast between its well-endowed versions and
> >> the most dire cr*p.
> >
> > Well, back then i perceived HP BASIC as the best language of all. It made
> > me boss on all those expensive HP machines (from 9845B to 9000/320).
> > But C ran on all Unix workstations. And as soon as i became ambidextrous
> > enough, i fell in love with the display manager of the Apollo Domain
> DN3000.
> >
> > Microsoft's Visual Basic is said to have surpassed HP BASIC in the years
> > later.
> >
> >
> >> Where would yabasic fit?
>
> I think it would rate fairly well. I browsed the BASICs in the
> repository and seemed best matched to my preferences, specifically no GUI.
>
> >
> > It seems to be inspired by C (see the syntax of "open"). But why use such
> > a C-BASIC when there is gcc, gdb and valgrind ?
> >
> 'Cause the last time I used C was about 4 *DECADES* ago ;}
> Although I have programmed, I would not claim to be a *programmer*.
>
>
>


Re: new install of amd64, 9-4 from iso #1

2018-06-10 Thread Mark Fletcher
On Sun, Jun 10, 2018 at 04:44:16PM -0400, Gene Heskett wrote:
> Greetings all;
> 
> I have the dvd written, and a new 2T drive currently occupying 
> the /dev/sdc slot.
> 
> What I want, since the drive has been partitioned to /boot, /home, /, and 
> swap, is 1; for this install to not touch any other drive currently 
> mounted, and 2; use the partitions I've already setup on this new drive 
> without arguing with me.
> 
> and 3: to  treat the grub install as if there are no other drives hooked 
> up. I don't need grub to fill half the boot screen with data from the 
> other drives.
> 
> How do I best achieve that?
> 
> Thanks a bunch.
> 

1 and 2 are simply a matter of giving the sensible answers to the 
appropriate questions from the installer. I can't remember exactly what 
the options are called but there is an expert partition mode that allows 
you to partition the disk how you want and I'd use that to verify the 
partitions are as you want and not change anything, map the parts of the 
filesystem you want to go on each partition in the installer, then 
continue.

If you don't tell it to install anything to the other disks then it won't.

For 3, I think I need to defer to the grub experts, not sure if you will 
have to preseed your install or if there is an easier way.

Mark



Re: Install matplotlib in Debian 9

2018-06-10 Thread Mark Fletcher
On Fri, Jun 08, 2018 at 09:21:23PM +0200, didier gaumet wrote:
> Le 08/06/2018 à 20:51, Markos a écrit :
> > Hi,
> > 
> > I'm starting my studies with Python 3 on Debian 9.
> > 
> > I have to install the matplotlib module, but I'm in doubt what is the
> > difference to install with the command:
> > 
> > pip3 install matplotlib
> > 
> > or
> > 
> > apt-get install python3-matplotlib
> > 
> > Is there any difference in the packages that are installed?
> > 
> > Thanks,
> > 
> > Markos
> > 
> 
> I suppose that this comparable to install a Firefox extension via
> apt-get or from Firefox: apt-get will provide an older version
> system-wide while pip3 will provide a more up-to-date version only in a
> user environment?
> Do not take my word for it, though: I have absolutely no competence in
> Python.
> 

Using pip is like building non-python software from source when it is 
already packaged for Debian -- possible, and occasionally necessary in 
some circumstances, but to be avoided where you can. If you use the 
Debian packaging system, Debian knows what you have installed and what 
libraries your system is dependent on, etc, and won't do anything to 
break your system for example when you upgrade. But if you install using 
pip Debian doesn't know anything about it (so won't upgrade it for you 
when you upgrade). In particular, but not limited to, upgrading a system 
that has a mix of manually-built and Debian-installed packages can be a 
pain.

I can tell you from experience the version of matplotlib in Debian 9, 
while not the latest and greatest, is plenty good enough. I use it quite 
a lot.

If this is you making a foray into data science with python, by the way, 
I also strongly recommend the pandas library (also in Debian, and again 
the version in Stretch is not latest but plenty new enough).
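To give a flavour of the pair together -- a minimal, hypothetical sketch 
(the data set and column names are invented for the demo):

```python
import pandas as pd
import matplotlib
matplotlib.use("Agg")  # render off-screen; no display needed
import matplotlib.pyplot as plt

# A tiny, made-up data set
df = pd.DataFrame({"year": [2015, 2016, 2017, 2018],
                   "count": [3, 7, 12, 20]})

# pandas does the tabular work, matplotlib the plotting
ax = df.plot(x="year", y="count")
ax.figure.savefig("/tmp/counts.png")

print(df["count"].sum())  # -> 42
```

Both the Stretch versions handle this sort of thing without complaint.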

Mark



Re: The Internet locks up Buster

2018-06-07 Thread Mark Fletcher
On Thu, Jun 07, 2018 at 02:13:17PM -0400, Borden Rhodes wrote:
> > I.e. 12309 bug is back. It's obscure and presumably fixed (at least four
> > times fixed) bug that happens with relatively slow filesystem (be it
> > SSD/HDD/NFS or whatever) and a large amount of free RAM. I first
> > encountered the thing back in 2.6.18 days, where it was presumably
> > implemented (as in - nobody complained before ;).
> 
> Thank you, Reco and Abdullah, for providing some very helpful
> information. I'll retest with the kernel parameters. I went over to
> https://bugzilla.kernel.org/show_bug.cgi?id=12309 and it seems they've
> closed the bug and/or given up on this. Is there any value in
> continuing to whine about this problem? I mean, it's not like
> large-capacity RAM is going away.
> 

I feel like we are missing a trick here. Even with a relatively slow I/O 
device (I was faintly amused to see SSD in the list of relatively slow 
devices, if SSD is slow what is fast?) it should eventually catch up 
UNLESS something is generating an insane amount of I/O over a sustained 
period. Just browsing the web shouldn't do that unless RAM is very 
tight, and the O/P indicated they have lots of RAM.

I run my machine here with 24GB RAM and part of my filesystem is on an 
external USB hard drive cage. From reading this thread you'd think that 
when I run data analyses reading and writing that external drive cage, 
it would be a recipe for this bug, but it isn't. And that is because 
those processes do a lot of work and make the CPU work hard, but they 
don't do insane amounts of I/O. (lots, but not insane amounts)

So, I think the O/P should look into what is causing all the I/O in the 
first place, and why that I/O is sustained even when most of the 
processes on the system are blocked. Something isn't right there. The 
usual suspect would be swapping but again the O/P said they have 
"large-capacity RAM" and were just browsing the web with or without 
LibreOffice open -- this shouldn't trigger swapping.

Mark



Re: setting up a drive automount in systemd?

2018-05-17 Thread Mark Fletcher
On Thu, May 17, 2018 at 11:36:01AM +0100, Darac Marjal wrote:

> However, before you go doing that, consider that systemd ALSO comes with a
> program called "systemd-fstab-generator". Contrary to its name, this
> generates unit files FROM an fstab (rather than generating an fstab).
> Therefore, following the Principle of Least Surprise, the current thinking
> is to continue to maintain your mountpoints in /etc/fstab and let systemd do
> the translation on the fly.
> 
> In summary, then, while you CAN run systemd without /etc/fstab, that file is
> still recommended as the expected configuration file for mountpoints.
> 

That has the ring of good advice. Especially since there could easily be 
other programs on your system that expect to find information about the 
mountpoints on the system by looking in fstab -- it's a file on the 
system, it's a long-standing standard, people are allowed to look at it 
and it's not unreasonable to imagine some piece of software will.
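As a concrete illustration (device UUID and mount point are invented for 
the example):

```
# A normal /etc/fstab line is all systemd needs:
#   UUID=0123-ABCD  /srv/data  ext4  defaults,nofail  0  2
#
# systemd-fstab-generator turns it into a srv-data.mount unit at boot,
# which can then be inspected and driven like any other unit:
#   systemctl cat srv-data.mount
#   systemctl start srv-data.mount
```

(The unit name is derived from the mount path, if I remember the escaping 
rules right.)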

Mark



Re: GPG error when trying to update Lenny

2018-05-17 Thread Mark Fletcher
On Wed, May 16, 2018 at 03:20:09PM +, Marie-Madeleine Gullibert wrote:
> Hello to all, 
> 
> I'm relatively new to Debian. I'm helping out a small organization that has a 
> library server installed on Debian to update their system. They run currently 
> on Debian lenny so I'm first trying to upgrade the Debian system, but I keep 
> running into a GPG error when I try to first update. I've tried many things 
> but none have worked so far, and would gladly welcome any suggestions. I do 
> have debian-archive-keyring installed (and up to date) and I've tried 
> retrieving my expired keys from a two different keyservers to no avail. 
> 
> Here's what happens (I'm running as root): 
> 
> localhost:~# apt-get update
> Get:1 http://archive.debian.org lenny Release.gpg [1034B]
> Ign http://archive.debian.org lenny/main Translation-en_US
> Get:2 http://archive.debian.org lenny/updates Release.gpg [836B]
> Ign http://archive.debian.org lenny/updates/main Translation-en_US
> Ign http://archive.debian.org lenny/updates/contrib Translation-en_US
> Get:3 http://archive.debian.org lenny/volatile Release.gpg [481B]
> Ign http://archive.debian.org lenny/volatile/main Translation-en_US
> Hit http://archive.debian.org lenny Release
> Hit http://archive.debian.org lenny/updates Release
> Hit http://archive.debian.org lenny/volatile Release
> Get:4 http://archive.debian.org lenny Release [99.6kB]
> Get:5 http://archive.debian.org lenny/updates Release [92.4kB]
> Ign http://archive.debian.org lenny Release
> Get:6 http://archive.debian.org lenny/volatile Release [40.7kB]
> Ign http://archive.debian.org lenny/updates Release
> Ign http://archive.debian.org lenny/volatile Release
> Ign http://archive.debian.org lenny/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/main Sources/DiffIndex
> Ign http://archive.debian.org lenny/updates/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/updates/contrib Packages/DiffIndex
> Ign http://archive.debian.org lenny/updates/main Sources/DiffIndex
> Ign http://archive.debian.org lenny/updates/contrib Sources/DiffIndex
> Ign http://archive.debian.org lenny/volatile/main Packages/DiffIndex
> Ign http://archive.debian.org lenny/volatile/main Sources/DiffIndex
> Hit http://archive.debian.org lenny/main Packages
> Hit http://archive.debian.org lenny/main Sources
> Hit http://archive.debian.org lenny/updates/main Packages
> Hit http://archive.debian.org lenny/updates/contrib Packages
> Hit http://archive.debian.org lenny/updates/main Sources
> Hit http://archive.debian.org lenny/updates/contrib Sources
> Hit http://archive.debian.org lenny/volatile/main Packages
> Hit http://archive.debian.org lenny/volatile/main Sources
> Fetched 235kB in 0s (301kB/s)
> Reading package lists... Done
> W: GPG error: http://archive.debian.org lenny Release: The following 
> signatures were invalid: KEYEXPIRED 1520281423 KEYEXPIRED 1337087218
> W: GPG error: http://archive.debian.org lenny/updates Release: The following 
> signatures were invalid: KEYEXPIRED 1356982504
> W: GPG error: http://archive.debian.org lenny/volatile Release: The following 
> signatures were invalid: KEYEXPIRED 1358963195
> W: You may want to run apt-get update to correct these problems
> 

Since you want to upgrade the installation to a later version, my 
suggestion is don't bother first trying to update Lenny. Just update 
your sources.list to the next release (lenny's successor was squeeze) 
and then update as usual.

Some releases had recommendations to use aptitude / not use aptitude, as 
opposed to apt-get, to do the update, I don't recall now if releases 
after Lenny did, but hopefully this comment will trigger someone else 
who does remember to chime in. Google may still be able to find old 
copies of the upgrade guides that are published with each new Debian 
release.

The only other piece of advice I have is don't try to go straight to 
stretch or buster from lenny -- instead upgrade one major release at a 
time, as that path is better trodden and more likely to work, and any 
issues you encounter are more likely to have been well-discussed in 
places Google can find (including the archives of this list).
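For something as old as lenny the repositories have all moved to 
archive.debian.org, so the first hop would mean pointing sources.list at 
squeeze there -- roughly like this (the lines are a sketch; check the 
archive for the exact components available):

```
deb http://archive.debian.org/debian/ squeeze main contrib non-free
deb http://archive.debian.org/debian-security/ squeeze/updates main
```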

Mark



Re: openvpn client DNS security

2018-04-05 Thread Mark Fletcher
On Thu, Apr 05, 2018 at 11:48:51AM +0200, Roger Price wrote:
> Hi, I had a problem setting up DNS on an openvpn client.  I'll describe it
> here before submitting a bug report - I would appreciate comment on the
> security aspects.
> 

> 
> Looking more closely at script /etc/openvpn/update-resolv-conf, it begins
> with the line
> 
>  [ -x /sbin/resolvconf ] || exit 0
> 
> File /sbin/resolvconf is not present, because package resolvconf is not a
> prerequisite for openvpn, so the script fails silently!  This looks to me
> like a serious security problem.  Joe Road-Warrior is out there, connected
> to the "free" Wifi.  He follows corporate instructions to turn on his
> openvpn client, but because of the exit 0 he is still using the local
> thoroughly compromised DNS server.
> 

apt-cache rdepends resolvconf shows a dependency of openvpn on 
openresolv, which according to apt-file provides /sbin/resolvconf (and 
also, if I am reading apt-cache output correctly, depends on 
resolvconf...)

I can only assume one of the dependencies in that stack is a "suggests" 
rather than a "depends". If you are going to report a bug probably worth 
acknowledging this so you don't get turned away at the door.

... Yep, checking apt show openvpn, resolvconf is indeed a "suggests".

Mark



Re: apt{-cache,-get,itude} show wrong version of package after update

2018-04-05 Thread Mark Fletcher
On Wed, Mar 28, 2018 at 09:31:11AM +0200, to...@tuxteam.de wrote:
> On Wed, Mar 28, 2018 at 07:47:05AM +0900, Mark Fletcher wrote:
> 
> [...]
> 
> > I'm not sure if you really did what it sounds like you did here, but if 
> > you did... you can't mix and match commands to apt-get and aptitude.
> 
> I think this is false, at least in such an unrestricted and
> sweeping way. Apt (and apt-get, its younger cousin) and aptitude
> are just front ends to dpkg and use the same data bases in
> the background.
> 
> In particular...
> 
> > You did apt-get update so you need to use apt-get upgrade, or 
> > dist-upgrade, or whatever the apt-get command is
> 
> ...apt update and apt-get update are equivalent (as most
> probably aptitude update is).
> 

It wasn't apt and apt-get that were being compared though, it was 
aptitude and apt-get. And there _is_ some sort of difference between 
those two such that you have to update with the right one; I'm sure I've 
seen discussion of that on this forum before (I don't have links 
though).

> > (I don't much use 
> > apt-get, have switched to the apt command since upgrading to stretch).
> 
> Apt is just a friendlier front-end for apt-get: the command
> outputs are not compatible (and you'll see a warning to that
> effect in apt, aimed at those who want to use apt's output
> in scripts), and aptitude has, AFAIK, some *extra* databases
> to record user intention, and a different dependency resolver,
> but the basic data sets (which packages are available, what
> state each is in, etc.) are common.

See above. The only person who mentioned apt was me, and even then only 
in the context of that's what I use nowadays. The OP never mentioned apt.

In any case, those "extra databases" are probably a pretty good reason 
not to mix and match front-ends in quite the way the OP was doing, even 
if it doesn't immediately lead straight to trouble trying to get one's 
system updated properly in the way I suggested it might.
> 
> > If you want to use aptitude upgrade, or dist-upgrade, or safe-upgrade, 
> > or whatever the command is (embarrassingly I have forgotten, I used 
> > aptitude for years _before_ upgrading to stretch) you need to first do 
> > aptitude update.
> > 
> > apt-get update followed by aptitude upgrade will lead to pain.
> 
> I don't think so: but I'm ready to be proven wrong!
> 

Certainly I have no proof except my experience and my (patchy) memory 
that I have seen discussion of this point on this list before.

Anyway the actual issue in this case turned out to be nothing to do with 
mixing and matching front-ends to dpkg. Glad the OP got his problem 
figured out.

Mark



Re: apt{-cache,-get,itude} show wrong version of package after update

2018-03-27 Thread Mark Fletcher
On Tue, Mar 27, 2018 at 07:50:03PM +0200, Jean-Baptiste Thomas wrote:
> After apt-get update, attempting to install ntp tries to
> download version 1:4.2.8p10+dfsg-3+deb9u1 and fails. It tries
> to download +deb9u1 because
> 
>   $ aptitude show ntp
>   Package: ntp
>   Version: 1:4.2.8p10+dfsg-3+deb9u1
>   State: not installed
>   [...]
> 
> and it fails because the version of the package in the Debian 9
> mirror listed in /etc/apt/sources.list is +deb9u2 :
> 
>   ntp_4.2.8p10+dfsg-3+deb9u2_amd64.deb
> 
> I don't understand what went wrong. apt-get update seemed to go
> well, only complaining about missing "DEP-11 64x64 Icons", which
> are presumably not a vital part of ntp.
> 
> How is this possible ? I'm confused.

I'm not sure if you really did what it sounds like you did here, but if 
you did... you can't mix and match commands to apt-get and aptitude.

You did apt-get update so you need to use apt-get upgrade, or 
dist-upgrade, or whatever the apt-get command is (I don't much use 
apt-get, have switched to the apt command since upgrading to stretch).

If you want to use aptitude upgrade, or dist-upgrade, or safe-upgrade, 
or whatever the command is (embarrassingly I have forgotten, I used 
aptitude for years _before_ upgrading to stretch) you need to first do 
aptitude update.

apt-get update followed by aptitude upgrade will lead to pain.

Hope that helps

Mark



Re: Password Manager opinions and recommendations

2018-03-26 Thread Mark Fletcher
On Mon, Mar 26, 2018 at 08:34:28PM +0100, Brian wrote:
> On Sun 25 Mar 2018 at 22:43:26 +0200, Ángel wrote:
> 
> > On 2018-03-25 at 19:47 +0100, Brian wrote:
> > > 1 day after the breach your data had been compromised. Changing your
> > > password 10 days later on in your 1 month cycle doesn't seem to me to
> > > be reactive security. Better than nothing, I suppose, but closing the
> > > door after etc.
> > > 
> > > In any case, your 20 character, high entropy password was your ultimate
> > > defence. (Not unless Yahoo! didn't hash).
> > 
> > 
> > Sure. If someone stole your password, be that by compromising and
> > injecting a password-stealing javascript server side, due to a sslstrip
> > you didn't notice on that free wifi, perhaps just someone looking at the
> > keys you pressed when entering your password, etc. the data you had up
> > to that point in that service should be considered compromised.
> > 
> > However, if the password was changed N days/months later, as part of a
> > periodic password change, that would mean that data processed after that
> > date would no longer be in risk, whereas otherwise the account would
> > continue being accessible by the bad actors for years (assuming that you
> > are not using a pattern that removes the benefit or rotating the
> > password!).
> 
> I would be more accepting of this argument if it fitted with real world
> examples in other fields. Nobody offers the advice to change the locks
> on your front door or your car at regular intervals. But the computer
> security business has conjured up the "what if" argument to counteract
> commensense.
> 
It's pretty difficult to steal someone's keys without them realising it 
has happened. In contrast, password compromise happens without the 
victim's knowledge all the time.

Mark



Re: Debian 9 rocks, really

2018-03-26 Thread Mark Fletcher
On Mon, Mar 26, 2018 at 10:06:17AM +0200, Mart van de Wege wrote:
> Andre Rodier  writes:
> 
> > Hello all,
> >
> > I have been using Linux since more than 20 years, and Debian Linux
> > since Potato.
> 
> Same here. I started out on Red Hat 6.2, and discovered Debian when it
> was on potato. I've been using some flavour of Debian personally since,
> and some flavour of it or RH professionally.
> 
> I love it. It's been great consistently, and 9 really shines. I even
> like systemd although I have some reservations about its design (I think
> it's a bit over-engineered).
> 
> Debian 9 give me dev tools, and tools to manage service resources better
> than ever. It's a lovely base system.
> 

#metoo, in a good way!

I started with Debian in 1996 -- the lovely Stephen Early, whom I 
occasionally see on this list, may have "fond" memories of porting the 
source code for the 1.3 development kernel onto my machine on floppy 
disks so he could help me get my brand spanking new ethernet card 
working! Yeah, probably not that fond memories...

In those days I dual-booted Windows and Debian. I was away for a few 
years after uni and then had a brief affair with SuSE but we don't 
talk about that in polite company. Came back to Debian when Woody was 
stable and been running Debian as the only OS on the box ever since. (I 
still run Windows but only in VMs, very much a minority use case for me 
now)

Over the years I have oscillated between the stable distribution and 
whatever was testing at the time. Now I run both, stretch on machines 
that are doing important stuff and buster to see what is coming.

As others have said, the combination of the philosophy, the dedication 
of the team, and the community make this a great way to spend one's 
computer's time. I love it.

Mark



Re: quick scripting 'is /P/Q mounted'

2018-03-14 Thread Mark Fletcher
On Tue, Mar 13, 2018 at 03:56:00PM -0400, The Wanderer wrote:
> On 2018-03-13 at 15:39, Joe wrote:
> 
> > On Tue, 13 Mar 2018 14:49:56 +0100  wrote:
> 
> That test can be spoofed, however, by the creation of a directory with
> the same name (and/or other characteristics) under the mount point while
> the mount is not active.
> 

Yes, but in most use cases one would not be worried about malicious 
actions, you are trying to protect against cock-ups.

> Even if you don't think anything malicious is ever going to try to spoof
> this in whatever case is at hand, can you be sure no script (or, for
> that matter, user) will ever attempt to create that directory under the
> mistaken impression that the mount is active?
> 

Yeah, that's a fair point though.

Mark



Re: quick scripting 'is /P/Q mounted'

2018-03-13 Thread Mark Fletcher
On Tue, Mar 13, 2018 at 08:49:58PM +1100, David wrote:
> On 13 March 2018 at 14:40, Mike McClain  wrote:
> >
> > If my other computer is South40 and I want to mount South40's /docs
> > on my /south40/docs/ directory I can do that. As one script calls
> > another I want to know if I need to mount South40 without
> > $( mount | grep 'south40/docs').
> >
> > Suggestions?
> 
> Installing the package util-linux will provide the mountpoint command
> which exits true=0 if its argument is in use as a mountpoint. Example:
> 
> $ if mountpoint / ; then echo "exit status is $?" ; fi
> / is a mountpoint
> exit status is 0
> 
Unless I've misunderstood the question, you can tell if something is 
mounted at a mount point by checking whether anything is present under 
the mount point. For example, if you know a directory /Y gets mounted 
under mount point /X, make sure /X/Y doesn't exist while nothing is 
mounted, then check for the existence of /X/Y -- it will be there if 
the mount point is in use and not otherwise.
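A minimal sketch of that idea (the paths are hypothetical examples; 
mountpoint(1) from util-linux, mentioned earlier in the thread, does 
the same job more robustly):

```shell
#!/bin/sh
# Sketch: decide whether anything is mounted at a mount point by
# probing for a path known to exist only on the mounted filesystem.
# MP and MARKER are hypothetical names for illustration.
MP=/south40/docs
MARKER="$MP/README"

if [ -e "$MARKER" ]; then
    echo "mounted"
else
    echo "not mounted"
fi
```

With nothing mounted at /south40/docs this prints "not mounted"; the 
same test can gate a conditional mount in a calling script.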

Mark



Re: can't install mplayer of stretch

2018-03-04 Thread Mark Fletcher
On Sun, Mar 04, 2018 at 10:01:23PM +, Long Wind wrote:
> there is a typo in my last post:  not -> now
> the cause might be i use installation CD 9.3.0 but now apt source is newest 
> (ftp.utexas.edu) 
> 
> On Sunday, March 4, 2018 4:55 PM, Long Wind  wrote:
>  
> 
>  the cause might be i use installation CD 9.3.0 but not apt source is newest 
> (ftp.utexas.edu)
> below is error msg of "apt-get install mplayer". is it possible to fix it? 
> Thanks!
> 
> Reading package lists...
> Building dependency tree...
> Reading state information...
> Some packages could not be installed. This may mean that you have
> requested an impossible situation or if you are using the unstable
> distribution that some required packages have not yet been created
> or been moved out of Incoming.
> The following information may help to resolve the situation:
> 
> The following packages have unmet dependencies:
>  mplayer : Depends: libavcodec57 (>= 7:3.2.2) but it is not going to be 
> installed or
> libavcodec-extra57 (>= 7:3.2.2) but it is not going to be 
> installed
>Depends: libavformat57 (>= 7:3.2.2) but it is not going to be 
> installed
>Depends: libswresample2 (>= 7:3.2.2) but it is not going to be 
> installed
> E: Unable to correct problems, you have held broken packages.
> 
> 
>
Do apt update && apt upgrade as root, and see what it wants to do. You 
don't have to let it do it -- you can answer N when it asks for 
confirmation on the upgrade -- but you can see if it wants to upgrade a 
ton of packages. If it does, my suggestion would be to let it do so and 
then try the installation again.

The issue could be that it still thinks something is in the repository 
with a dependency on an older version of one of those libraries, and you 
need to update its understanding for it to see how it can fulfill your 
request to install mplayer.

If that doesn't help then you need to look closely at the libraries 
mplayer depends on and see if you have something pinning an older 
version of them or a conflict preventing them from being installed. I 
don't recall if apt has a why / why-not command, but aptitude does -- 
one early step would be to let the system tell you why it can't install 
the dependencies of mplayer. Note your error message complained about 
all of them, but it only takes one to actually have a problem, so check 
them all.
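As a hedged sketch of that last step, the loop below asks aptitude 
about each dependency named in the error message (the guard makes it a 
no-op on a box without aptitude installed):

```shell
#!/bin/sh
# Ask aptitude why each dependency from the error message cannot be
# installed.  Guarded so the sketch degrades gracefully where
# aptitude is absent; "|| true" keeps the loop going if aptitude
# exits non-zero for one package.
for dep in libavcodec57 libavformat57 libswresample2; do
    echo "checking $dep"
    if command -v aptitude >/dev/null 2>&1; then
        aptitude why-not "$dep" || true
    else
        echo "aptitude not installed; skipping"
    fi
done
```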

HTH

Mark



Re: Strange Loss of Synaptic Functionality

2018-02-10 Thread Mark Fletcher
On Sat, Feb 10, 2018 at 08:10:18AM -0500, Stephen P. Molnar wrote:
> 
> Thanks for the reply.
> 
> root@AbNormal:/home/comp# apt-get clean
> root@AbNormal:/home/comp# 
> 
> Just before I read your reply I edited sources-list for a different
> source, debian.uchicago.edu father than debian.org, and the synaptic
> function was partially restored.
> 
> # deb cdrom:[Debian GNU/Linux 9.3.0 _Stretch_ - Official amd64 DVD
> Binary-1 20171209-12:11]/ stretch contrib main 
> 
> # deb cdrom:[Debian GNU/Linux 9.3.0 _Stretch_ - Official amd64 DVD
> Binary-1 20171209-12:11]/ stretch main contrib 
> 
> deb http://debian.uchicago.edu/debian/ stretch contrib non-free main 
> deb-src http://debian.uchicago.edu/debian/ stretch contrib non-free
> main 
> 
> # deb http://security.debian.org/debian-security/ stretch/updates main
> contrib non-free  
> # deb-src http://security.debian.org/debian-security/ stretch/updates
> main contrib non-free  
> 
> # stretch-updates, previously known as 'volatile'
> deb http://debian.uchicago.edu/debian/ stretch-updates non-free contrib
> main 
> deb-src http://debian.uchicago.edu/debian/ stretch-updates non-free
> contrib main 
> 
> # stretch-backports, previously on backports.debian.org
> deb http://debian.uchicago.edu/debian/ stretch-backports non-free
> contrib main 
> deb-src http://debian.uchicago.edu/debian/ stretch-backports non-free
> contrib main 
> 
> For some reason synaptic didn't like Debian security entries, but
> worked when they were commented out.  However, I am not happy about not
> being able to receive security updates.  Are those entries correct?
> 

I think the answer to that is no. On Stretch here I have 

deb http://security.debian.org/ stretch/updates main contrib non-free
deb-src http://security.debian.org/ stretch/updates main contrib non-free

ie no /debian-security

Mark



Re: Causes, cures and prevention of orphaned inodes?

2018-02-04 Thread Mark Fletcher
On Sun, Feb 04, 2018 at 03:49:36PM -0500, Stephen P. Molnar wrote:
> I am running Debian Stretch on an eight-thread AMD platform.
> Lately, it seems as if I have been plagued by a surfeit of orphaned inodes.
> 
> I have googled the causes, cures and prevention, but have gotten no
> results that make any sense to me. I've been using computer since the
> early 1960's but am an organic chemist by training and experience, not
> a hardware expert.
> 

The problem may not be hardware. From reading the details you provided, 
it looks like you are using ext4 filesystems on your disks. Is that 
correct? We occasionally get people on here reporting problems with more 
esoteric / exotic file systems (cue the cries of protest from various 
corners that super-duper-dijeridoo-fs isn't exotic, and that I'm a 
dinosaur) but ext4 is in very wide use and as far as I know, stable.

Anyway worth confirming what filesystem(s) is/are actually on the disks 
where orphaned inodes are occurring. If it is something more unusual, 
you might have found a bug in the filesystem. Also, do you use 
encryption on your disks eg LUKS?
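A quick way to confirm both points at once (/ is used here as an 
example; substitute the mount point where the orphaned inodes are 
being reported):

```shell
#!/bin/sh
# Print the filesystem type behind a mount point.  df -T reports
# the backing device and its filesystem type (ext4, xfs, etc.).
df -T /

# lsblk -f would additionally reveal a crypto_LUKS layer, if any:
# lsblk -f
```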

Just a couple of thoughts

Mark



Re: Network setup by installer

2018-01-25 Thread Mark Fletcher
On Sat, Jan 20, 2018 at 10:59:31AM +, Brian wrote:
> 
> The technique just replaces installing over a wireless link from the
> start. I've been wondering why you chose not to do that and avoid the
> extra work.

Sorry for the delay in replying. An early draft of one of my previous 
mails in this thread pre-emptively addressed that question, but I have 
been fairly aggressively self-editing on this thread so as not to 
attract the ire of those who are triggered by being expected to read 
more than a few lines of text.

I had 2 reasons (no claim is made that they were good ones) not to do 
the install via WiFi in the first place -- firstly I wasn't sure if the 
installer would be able to use my WiFi out of the box before trying it 
(eg firmware in contrib or non-free or something) and didn't want the 
hassle of getting into an install and finding it falling over -- I have 
this probably irrational desire to see the install go smoothly first 
time. Probably been working in a corporate IT environment too long.

Secondly I knew the net install would be doing a lot of downloading and 
believed (possibly erroneously) that it would be faster over a wired 
connection, especially if the wireless connection out of the box were in 
any way compromised eg old / dodgy / incorrectly selected firmware. 
The wireless part of my home network is moderately populous, the wired 
part much less so.

Mark


