Re: Ubuntu - Logs out or Crashes

2024-03-09 Thread Craig Sanders via luv-main
Also, disable animations and other bling in your desktop and apps too
(many have options to disable stuff like that). They might be pretty and
entertaining the first few times you see them, but they just waste resources
(like RAM and CPU power) doing stuff that isn't necessary.

If you're using KDE or Gnome, it may also be worth considering switching
to a lightweight desktop like xfce or lxde. This doesn't have to mean
switching distros - any mainstream (i.e. non-tiny) distro can run any
desktop environment, even if the default/recommended DE is one of the
heavy-weights. You can even have multiple DEs installed so you can choose
between them whenever you login.



Finally, if you're using a lot of snap packaged software, you may want to look
for .deb packages instead. snaps & flatpaks & the like are convenient but they
do use more RAM (because they have their own bundled copies of the libraries
they need which run in their own namespace rather than using the same shared
libs as the rest of the system - so if you're running multiple snap apps, you
end up having multiple copies of various libs in RAM at the same time)

e.g. steam is available as both a .deb package AND as a snap for ubuntu. the
.deb package will use less RAM.

Ubuntu have been heavily pushing snap packages for the last few years -
installing them in preference to any .deb packages that may also be available.
I recommend closely examining the programs you use and checking whether you
can save some RAM by using .deb packages for some of them instead. You may
have to search for third party .deb PPAs for some packages.
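
A quick way to see what's currently installed as a snap, and whether a .deb
with the same name is available from the normal repos, is something like this
(rough sketch - substitute the package name you're interested in):

    snap list                     # list installed snap packages
    apt-cache policy PACKAGENAME  # show whether a .deb candidate exists, and from which repo

Note that on recent Ubuntu releases some .debs (firefox is the usual example,
IIRC) are just transitional stubs that install the snap anyway, so check what
the .deb actually depends on before assuming it'll save anything.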


craig


Re: Ubuntu - Logs out or Crashes

2024-03-09 Thread Craig Sanders via luv-main
On Fri, Mar 08, 2024 at 03:31:38PM +1000, Piers Rowan wrote:
> On 8/3/24 10:52, Craig Sanders via luv-main wrote:
> > 16GB isn't a lot these days.  My guess is you're most likely running out
> > of RAM.  The best solution is to add more RAM to the system if possible.
>
> That is not possible with this model. The strange thing is that this

That's a shame.  Is the memory soldered in? if not, maybe there's a
third-party RAM kit that might fit?

Otherwise, do you have a desktop PC as well? Maybe you could offload some of
the workload to that, accessing it via ssh.

BTW, you can even tunnel X applications over ssh - I wouldn't want to use a
web browser or watch videos this way, but it should be fine for editing code
and probably for LibreOffice Writer & Calc too. I use ssh like this quite
often, in particular whenever I need to scan documents with xsane because my
scanner is connected to my home server rather than my workstation machine.
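
e.g. something like this (username and hostname are made up, and it assumes
X11Forwarding is enabled in the server's sshd_config) runs xsane on the remote
machine with its window displayed on your local desktop:

    ssh -X craig@homeserver xsane

-Y (trusted X11 forwarding) also works, and is sometimes needed for apps that
the X security extension breaks.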

Anyway, using ssh like this can work quite well if you're at the same location
(e.g. home) but can be inconvenient if you need to travel between multiple
locations (maybe a VPN could help, or at least static IP addresses).

As Sun used to say back in the 80s and 90s, "The Network is the
Computer". Using ssh like this (or it would have been rsh at the time) is
partly what they meant by that.  Obviously, the faster & lower latency the
network, the better - a wired 1Gbps or 2.5Gbps LAN would be better than
wireless but even wireless is fine if there's not too many other WLANs nearby
competing for bandwidth.

> behavior is recent. Running applications concurrently has never been an
> issue. Unless there is something bloaty about an update this issue should
> never happen... but.

Unless you've drastically changed what you're doing lately, this is probably
just the "natural evolution" of software over time.  Software tends to be
updated to do more stuff (to use a highly technical term), and doing more
stuff requires more resources - RAM in particular. Apps get bigger, and
the libraries they depend upon get bigger too. And software under rapid
development tends to oscillate between rapid growth spurts of new features and
experiments with little concern for optimisation, and periods of consolidation,
optimisation, and bug fixes.


> What I noticed was that the system chewed away at the swap to the tune of
> 2GB (its Max).
> 
> I've increased the swap to 4GB:
> 
> webgen@webgen-01:~$ sudo swapon -s
> Filename                Type        Size        Used Priority
> /swapfile   file        4194300 0        -2

I'd be inclined to increase that to at least 8GB if you have the disk space
available for it.  And watch it closely for a while, just to see how close
your system gets to filling it up.
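
Roughly, growing an existing /swapfile to 8GB looks something like this
(untested sketch, and it assumes you have ~8GB free on that filesystem):

    sudo swapoff /swapfile
    sudo fallocate -l 8G /swapfile   # or: sudo dd if=/dev/zero of=/swapfile bs=1M count=8192
    sudo chmod 600 /swapfile
    sudo mkswap /swapfile
    sudo swapon /swapfile

If the swapfile is already listed in /etc/fstab it'll be picked up again
automatically on the next boot.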

Also watch to see if the swap usage fluctuates all the time (indicating stuff
is being constantly swapped in and out - it's possible that this may be why
you've used so much of the drive's rated write endurance in only a few years)
or whether it settles down and mostly stabilises after a while.
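
An easy way to watch for that is vmstat - the si/so columns show how much is
being swapped in and out during each interval:

    vmstat 5

If si/so are consistently non-zero while you're working, the machine is
thrashing, and more swap won't really help - only more RAM (or running less at
once) will.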


BTW, in another message you were talking about replacing the NVME drive. it's
worth mentioning that upgrading or replacing a drive doesn't have to require
a complete re-install.  You can just copy one drive to another (either a
bit-wise copy using dd or similar, or a file copy using e.g. rsync or tar or
cp -af), and optionally use gparted to adjust the size of your partition(s) if
necessary.  If your laptop doesn't have two NVME slots, you could back it up
to an external hard disk, e.g. using a bootable USB stick of Clonezilla, and
then restore it to the new NVME - this is probably the easiest way...even if
you have two NVME slots, Clonezilla is a convenient way to clone a system from
one drive to another.
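
As a very rough sketch (the device names below are just examples - check
yours with lsblk first, and be very sure which drive is the destination):

    # bit-wise copy of the whole old drive onto the new one
    dd if=/dev/nvme0n1 of=/dev/nvme1n1 bs=4M status=progress conv=fsync

    # or a file-level copy from a mounted old filesystem to a new,
    # already-partitioned-and-formatted one
    rsync -aHAX /mnt/old/ /mnt/new/

dd copies the partition table and everything else as-is, so it's the closer
equivalent to what Clonezilla does; with rsync you have to partition, format
and install a bootloader on the new drive yourself.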

If you're using UEFI to boot, it should just work because UEFI looks for the
FAT32 "EFI System" partition.

If you're using old-fashioned style MBR to boot then you'll probably need
to re-install the grub boot sector on the new disk - you can do that from
the clonezilla USB stick or some other "rescue" disk. you'll need to mount
your partition(s) into, e.g., /target then bind mount /proc, /sys, /dev under
/target, then "chroot /target" before running grub-install. then exit from the
chroot and unmount them.  Clonezilla may do this for you automatically when
you clone or backup & restore an entire drive...or it may not. Can't remember.
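
Spelled out as commands, the manual version is roughly this (disk and
partition names are placeholders - adjust to suit):

    mount /dev/sdXn /target
    mount --bind /proc /target/proc
    mount --bind /sys  /target/sys
    mount --bind /dev  /target/dev
    chroot /target
    grub-install /dev/sdX
    update-grub
    exit
    umount /target/dev /target/sys /target/proc /target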



> I am currently running:
>
> - Terminal
> - GIMP

Terminal probably doesn't use that much RAM, unless you have it set to
keep tens of thousands of lines of backscroll buffer.  GIMP can use a lot,
depending on what you're doing with it - proportional to the size and
complexity of your artwork.

> - Visual Code

This is microsoft's code editor, right?  If so, it's probably bloatware -
MS aren't noted f

Re: Ubuntu - Logs out or Crashes

2024-03-07 Thread Craig Sanders via luv-main
On Tue, Mar 05, 2024 at 05:16:50AM +1000, Piers Rowan wrote:
> > > Lenovo Lenovo Yoga C740-14IML (Touch screen)
> > > 16.0 GiB / Intel® Core™ i7-10510U CPU @ 1.80GHz × 8
>
>[...]
>
> I just had a crash and managed to 'top' as it was going down. There was a
> process  fossilize_replay which is compiling shaders for a game (power went
>
>[...]
>
> I haven't changed anything and normally run all my applications
> concurrently: 2x code editors, Chrome, TB, console, Virtual Box and desktop
> apps (writer/calc).

16GB isn't a lot these days.  My guess is you're most likely running out of
RAM.  The best solution is to add more RAM to the system if possible.

Adding more swap space (a swap file will do if you can't repartition the
drive) can reduce the risk of out-of-memory crashes due to temporary
short-term spikes in usage (perhaps from things like fossilize_replay). Note
that adding more swap is a cheap, easy bandaid - it's worth doing but it
doesn't actually solve the problem. You won't want to be using a system that's
constantly swapping stuff in and out of RAM.

Also, try to reduce memory usage.  Start by not running everything at once if
you don't actually need to. e.g. don't run steam and your VM and Chrome at
the same time (i.e. if you're about to play a game, quit from everything not
needed by steam or that game), and don't leave steam running when you're not
about to play something.

Maybe reduce the amount of RAM allocated to your VM(s).

Chrome is a huge memory hog.  Try not to have too many windows/tabs open at
once.  Chrome's memory usage can be reduced by using an extension like The
Marvellous Suspender, which puts tabs to sleep when they've been idle for a
while.

https://chromewebstore.google.com/detail/the-marvellous-suspender/noogafoofpebimajpfpamcfhoaifemoa

If one or more of your code editors is Atom or some other chrome / electron
based app then it will also have huge RAM requirements (partly from being
based on chrome, partly from being a shitty javascript app).  Replace it with a
native app. Maybe learn to use vi or emacs.

Ditto for stuff like the Discord app if you use that, which is also just
chrome bundled up with some javascript pretending to be a real app.

craig

--
craig sanders 


Re: Email accounts which I have used for years are now bouncing mail

2023-05-28 Thread Craig Sanders via luv-main
On Sun, May 28, 2023 at 05:29:13PM +1000, Tony White wrote:
> Add a new record to your Zone. Look for TXT record.  Leave the
> domain empty, select TXT from the drop down list.  In the field
> to its right insert the value
>
> -- snip ---
> v=spf1 ip4:203.170.84.161 ~all
> -- end snip --

That should be "-all", not "~all" (a minus sign, not a tilde).

Also, while that IP address is correct for mail.algphoto.com.au (which is the
MX for the domain, so is the designated address for the server **receiving**
mail for that domain), it's not necessarily the IP address of the client host
sending the email.  The SPF TXT record should list **ALL** addresses that can
legitimately claim to be sending mail from the algphoto.com.au domain.

If mail from algphoto.com.au is only ever sent from 203.170.84.161 then that
SPF TXT record is fine.  If not, either configure client machines to relay
through that server (with appropriate authentication as required) or add other
required addresses to the SPF record.
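
e.g. if mail also legitimately goes out from a second address, the record
would look something like this (192.0.2.10 is just a placeholder):

    v=spf1 ip4:203.170.84.161 ip4:192.0.2.10 -all

You can also use the "a", "mx" and "include:" mechanisms instead of listing
raw addresses (e.g. include: if some mail goes out via a third-party provider)
- see RFC 7208 for the full syntax.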

> change the TTL value to 3600 then save.
>
> The DNS will be updated in an hour or so.

No, it won't.  The zone will be updated as soon as it has been edited.  Other
DNS resolvers may have some of your records cached due to prior queries,
and it will take time (whatever's left of the TTL since the last query was
made) for those cached entries to expire.

Changing the TTL will only affect lookup requests by DNS resolvers that
haven't already cached any of your DNS records.
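
An easy way to see what a given resolver has cached right now is to query it
directly with dig - the number in the second column of the answer is the
remaining TTL in seconds, and it counts down on repeated queries while the
record is cached (1.1.1.1 here is just an example public resolver - use
whichever one you care about):

    dig TXT algphoto.com.au @1.1.1.1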

Setting the TTL to anything less than a day (86400 seconds) or half a day is
not recommended for normal operation.  This will just make other servers query
your domain more often, defeating the purpose of a cache.

A fairly common procedure when making significant changes to a domain is to
set the TTL to a low value, say 5 minutes, *before* you make the change, then
wait for the old TTL to expire (e.g. if it was set to 86400, you'll need to
lower the TTL at least a day before changing your domain), and then make the
changes.  Later, set it back to 86400 or longer.  This is really only needed
when moving your NS or MX hosts, or when moving to another ISP (if you don't
own your own IP addresses).

Or just go ahead and edit your domain and don't worry about the TTL (for
changing an SPF record, I wouldn't worry about it).  The only real use for
lowering the TTL is to make it easier for you to quickly fix any mistakes you
might make, as they won't be cached for long.

Worth noting: the cache TTL for NXDOMAIN negative responses (i.e. when
requesting a record that does not exist) is usually much shorter than the TTL
for a positive response, typically anywhere from 30 seconds to 15 minutes,
rather than a day.  So if the domain didn't previously have a TXT record, it
won't be cached for long anyway.


Also note that many DNS resolvers completely ignore domain TTLs and just
cache the records (positive or negative or both) for however long they want,
sometimes days or even weeks.  This is broken but unfortunately common.
It's also arguable that whoever owns those DNS servers has every right to
configure the caching on their servers however they like.  There's nothing you
can do about it, anyway, because the caching happens on servers you have no
access to or control over.

craig

--
craig sanders 


Re: Postfix - virtual_alias - not working

2023-04-21 Thread Craig Sanders via luv-main
On Fri, Apr 21, 2023 at 02:14:09PM +1000, Piers Rowan wrote:
> --/etc/postfix/main.cf:
> virtual_alias_maps = hash:/etc/postfix/virtual
> virtual_alias_domains = ngungee.com inertiagh.com.au sunnycoastjobs.com
>
>
> /etc/postfix/virtual:
>
> # IGH Info Account
> i...@inertiagh.com.au    info_ingh
>
> ~~~
>
> After making changes (via Webmin) nothing happens. After running:
>
> postmap /etc/postfix/virtual
>
> Nothing happens.
>
> The messages still end up in a different mail box to info_ingh
>
> Any ideas on this?

You need a line in /etc/postfix/virtual with just the domain on the LHS
and anything at all on the RHS (I use the word "VIRTUAL", but you can use
anything).

e.g.

inertiagh.com.au    VIRTUAL


Run `man 5 virtual` for full details about the virtual file format.

In part, it says:

  | The "virtual-alias.domain anything" entry is required for a virtual alias
  | domain. Without this entry, mail is rejected with "relay access denied",
  | or bounces with "mail loops back to myself".



BTW, the RHS of the actual `user@domain` entries in a virtual_alias map is an
email address (either a local address or with a fully qualified domain name),
not a file.  That means if you want to map it directly to a file, you'll also
need an entry in /etc/aliases:

info_ingh: /path/to/filename

mail for i...@inertiagh.com.au would be received by postfix, delivered to
local address info_ingh, which happens to be an alias with a file path to an
mbox file (the pathname has to be writable by postfix).
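
Putting it all together, a minimal sketch (the mbox path is just an example,
and remember to rebuild both maps after editing):

    # /etc/postfix/virtual
    inertiagh.com.au         VIRTUAL
    i...@inertiagh.com.au    info_ingh

    # /etc/aliases
    info_ingh: /var/mail/info_ingh

    postmap /etc/postfix/virtual
    newaliases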


Alternatively, send all (or most) mail for that domain to a particular user
with a .procmailrc file to save incoming mail to mbox files or maildir
directories according to the rules in that file (or similar rules-based
local-delivery agent).



craig

--
craig sanders 


Re: Windows on a Linux kernal ?

2023-04-01 Thread Craig Sanders via luv-main
On Mon, Mar 27, 2023 at 10:52:04AM +1100, Rohan McLeod wrote:
> Just out of curiousity I 'googled'
>
> +windows on a +"linux kernal" ? +2023
>
> Rather irritatingly it ignored the "+" but [...]

maybe because "kernal" doesn't match a lot because it's a spelling error? does
google auto-correct spelling on searches for double-quoted phrases? or does it
assume you're deliberatly searching for errors? dunno.

"windows on linux kernel" might get more relevant hits. maybe not.

BTW, google pretty much ignores small words like "on", "a", "the". at least
for unquoted strings. I think it *may* treat them as a slight preference for
exact-match strings when quoted.

craig


Re: Change a file name - remove a consistent string recursively

2023-01-16 Thread Craig Sanders via luv-main
On Tue, Jan 17, 2023 at 12:01:35AM +1100, Les Kitchen wrote:
> On Mon, Jan 16, 2023, at 21:42, Craig Sanders via luv-main wrote:
> > On Fri, Jan 13, 2023 at 10:39:02PM +1100, Les Kitchen wrote:
> >> I'd do something like:
> >>
> >> find /Dir1 -type f | perl -lne '$o=$_; s/\.junk\././; print("mv -i $o $_") 
> >> if $_ ne $o;'
>
> Thanks, Craig, for your followup.
>
> > This is quite dangerous, for several reasons.  To start with, there's no
> > protection against renaming files over existing files with the same target
> > name.
> ...
>
> Well, that's the intention of the -i (interactive) option to mv,
> to require user agreement before over-writing existing files.

Which gets tedious real fast with more than a few files to confirm.


> All the other points you raise are valid, especially the dangers of feeding
> unchecked input into the shell, and anybody writing shell code needs to be
> aware of them — although I will say I mentioned pretty much all of them in
> the notes further down in my message, though in less detail than you have,
> and without stressing enough the dangers.

Yeah, i noticed that - I just thought it needed to be emphasised and explained
in more detail. These issues are the source of a lot of really serious bugs in
shell scripts & one-liners.


> And, yes, if you have filenames with arbitrary characters, then you have to
> resort to other contrivances, ultimately to NULL-terminated strings.

Using NUL as the separator isn't a "contrivance".  It's standard good practice
- use a delimiter that ISN'T (and preferably CAN'T be) in your input.  Since
NUL is the only character that can't be in a path/filename, that's the only
character to use.  It works whether you've got annoying characters in the
filenames or not. No special cases or special handling required. It just
works.


> And, yes, if you have a huge number of files, then you'd likely want to do
> the rename internal to your scripting language, instead of forking a new
> process for each file rename.  But then you lose the easy ability to review
> the commands before they're executed.

It's not difficult to change a print() statement to a rename() statement, or
to have both and comment out the rename until you've verified the output (i.e.
a simple "dry-run").

> And I could also mention the potential for filenames to contain UTF-8
> (or other encodings) for characters that just happen to look like ASCII
> characters, but aren't, or to contain terminal-control escape sequences.  It
> can get very weird.

While there's a handful of problematic unicode characters (mostly the extra
whitespace characters), in general unicode is not a problem. Especially if
you use NUL and/or proper quoting and/or arrays (e.g. `find` in combination
with the bash/ksh/zsh builtin mapfile/readarray and process substitution is
extremely useful - mapfile also supports NUL as the delimiter, another great
method of eliminating whitespace & quoting bugs).
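
e.g. a rough sketch of the find + mapfile + NUL combination (needs bash 4.4
or later for mapfile -d ''):

    mapfile -d '' -t files < <(find /Dir1 -type f -name '*.junk.*' -print0)
    for f in "${files[@]}"; do
        printf 'found: %q\n' "$f"    # do whatever you need with "$f" here
    done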

> In general, there's a big big difference between a simple shell one-liner
> that you use as a work amplifier in situations you know are well-behaved,
> and a piece of robust code that can behave gracefully no matter what weird
> input is thrown at it.  They're different use-cases.

It's not hard to write robust one-liners. It just takes practice - a matter
of developing good habits and stomping on bad habits until it's automatic.

And using tools like shellcheck to highlight common mistakes and bad
practices helps a lot - it's available as a command-line tool and as a
paste-your-code-here web service. https://www.shellcheck.net/

It's packaged for debian and probably most other distros and is, IMO,
essential for any shell user, even if (especially if!) you're just dabbling
with the simplest of shell scripts or one-liners.  I wish it had been around
when I was learning shell - I look at some of the shell code I wrote years ago
and just shudder at how awful it is.  I got better with practice, though :) I
made a lot of those mistakes because I simply didn't know they were mistakes,
didn't know how dangerous they were, didn't know any better at the time.
shellcheck solves that problem.

Package: shellcheck
Description-en: lint tool for shell scripts
 The goals of ShellCheck are:
 .
  * To point out and clarify typical beginner's syntax issues,
that causes a shell to give cryptic error messages.
 .
  * To point out and clarify typical intermediate level semantic problems,
that causes a shell to behave strangely and counter-intuitively.
 .
  * To point out subtle caveats, corner cases and pitfalls, that may cause an
advanced user's otherwise working script to fail under future circumstances.


Hastily written one-liners often lead to questions like "WTF happened to my
data?", "How can I reverse this 'sed -i' c

Re: Change a file name - remove a consistent string recursively

2023-01-16 Thread Craig Sanders via luv-main
On Fri, Jan 13, 2023 at 10:39:02PM +1100, Les Kitchen wrote:
> I'd do something like:
>
> find /Dir1 -type f | perl -lne '$o=$_; s/\.junk\././; print("mv -i $o $_") if 
> $_ ne $o;'

This is quite dangerous, for several reasons.  To start with, there's no
protection against renaming files over existing files with the same target
name.

It also doesn't distinguish between .junk. in a directory name vs in a file
name - it will just modify the first instance of ".junk." it sees in the
pathname. e.g. "./Dir1/My.junk.dir/my.junk.file.txt".  Probably not a problem
in practice, but something to be aware of.

Worse, it will break if any filenames contain whitespace characters (newlines,
tabs, spaces, etc - all of which are completely valid in filenames - the ONLY
characters guaranteed NOT to be in a pathname are / and NUL).

And because you're NOT quoting the filenames in your print statement, it
will also break if any filenames contains shell metacharacters like ; & > <
etc when the output is piped into sh. A simple fix might appear to be to use
single-quotes in the print statement - e.g. print("mv -i '$o' '$_'") - but
even this will break if a filename contains a single-quote character. Similar
for escaped double-quotes.

Shell can be very dangerous if you don't quote your arguments properly.
Consider, for example, what would happen if there happened to be a file called
";rm --no-preserve-root -rf /;" (or ";sudo rm ;") under /Dir1.  That's
a fairly extreme example of an obviously malicious filename, but there are
plenty of legitimate, seemingly innocuous filenames that WILL cause problems
if passed unquoted to the shell.

Whitespace and quoting issues in shell are well-known and long-standing,
and pretty much inherent to the way the shell parses its command line - the
subject of many FAQs and security advisories.

It's unfortunately very easy to improperly quote filenames - it's far harder
to do correctly and 100% safely than it seems at first glance.

For safety, if you were to DIY it with a command like yours above (there are
far better alternatives), you should use -print0 with find and the -0 option
with perl.

In fact, you should use NUL as the separator with ANY program dealing with
arbitrary filenames on stdin - most standard tools these days have -0 (or
-z or -Z) options for using NUL as the separator, including most of GNU
coreutils etc (head, tail, cut, sort, grep, sed, etc. For awk, you can use
BEGIN {RS="\0"} or similar).
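
e.g. a NUL-separated pipeline built from those options might look something
like this (the pattern is just an example):

    find /Dir1 -type f -print0 | grep -z '\.junk\.' | sort -z | xargs -0 -r ls -ld

Every stage passes NUL-terminated pathnames to the next, so whitespace or
quote characters in the filenames can't break anything.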

Also:

1. perl has a built-in rename function, there's no need to fork mv (which
would be extremely slow if there are lots of files to rename).  And perl
isn't shell, so doesn't have problems with unquoted whitespace or shell
metacharacters in the filenames.  Still doesn't protect against clobbering
existing filenames without some extra code, though:

$ perldoc -f rename
rename OLDNAME,NEWNAME
Changes the name of a file; an existing file NEWNAME will be
clobbered. Returns true for success; on failure returns false
and sets $!.

Behavior of this function varies wildly depending on your system
implementation. For example, it will usually not work across
file system boundaries, even though the system *mv* command
sometimes compensates for this. Other restrictions include
whether it works on directories, open files, or pre-existing
files. Check perlport and either the rename(2) manpage or
equivalent system documentation for details.

For a platform independent "move" function look at the
File::Copy module.

Portability issues: "rename" in perlport.

2. Even better, a perl rename utility (aka file-rename, perl-rename, prename,
etc as mentioned in my previous message in this thread) already exists and
won't overwrite existing files unless you force it to with the -f option.

It also distinguishes between directories and file names (by default, it will
only rename the filename portion of a pathname unless you use the --path or
--fullpath option).  It can take filenames from stdin (and has a -0 option for
NUL-separated filenames) or as command-line args (e.g. with 'find ... -exec
rename  {} +')

craig


Re: Change a file name - remove a consistent string recursively

2023-01-16 Thread Craig Sanders via luv-main
On Thu, Jan 12, 2023 at 05:49:13PM +1000, Piers Rowan wrote:
> I have a structure like:
>
> /Dir1/123.junk.doc
> /Dir1/456.junk.pdf
> /Dir1/SubDir/1123.junk.doc
> /Dir1/SubDir/1456.junk.pdf
> /Dir2/SubDir/4321.junk.doc
> /Dir2/SubDir/7676.junk.pdf
> ...etc...
>
> I want some guidance as to how to make:
>
> 1123.junk.doc > 1123.doc
>
> $ID.junk.$EXT > $ID.$EXT

Using find and the perl rename utility (which is not the same as the rename
program in util-linux - that has completely different and incompatible command
line options):

find /Dir1/ -type f -exec rename -n 's/\.junk\././' {} +

That's a dry-run, it will only print what **would** be renamed, without
actually doing it.  Once you've confirmed that it's going to do what you want,
run it without -n, or change -n to -v for verbose operation.

Optionally add a `g` regex modifier to the s/// operation ('s/\.junk\././g')
if filenames might contain .junk. more than once.

perl rename allows you to use **any** perl code to rename files - from simple
sed-like regex transformations like the one above to quite complex scripts
(it's pretty simple to use sprintf to, say, zero-pad numbers in filenames so
that they sort correctly with just a plain numeric sort rather than a natural
sort).
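
e.g. a sketch of that zero-padding trick, using the /e (evaluate) modifier to
run sprintf on the first run of digits in each filename:

    find /Dir1/ -type f -exec rename -n 's/(\d+)/sprintf("%04d", $1)/e' {} +

As with the earlier example, drop the -n once you're happy with the dry-run
output.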



Depending on your distro, the perl rename command might be rename, prename,
file-rename, or perl-rename. Try running each of them to find out what it's
called on your system.

On Debian and related distros it's in the `rename` package and (via the
/etc/alternatives system) is executed as just "rename":

Package: rename
Version: 2.00-1
Installed-Size: 57
Maintainer: Debian Perl Group 
Architecture: all
Depends: perl:any
Description-en: Perl extension for renaming multiple files
 This package provides both a perl interface for renaming files (File::Rename)
 and a command line tool 'file-rename' which is intended to replace the version
 that used to be supplied by the perl package.




You can confirm which variant of rename you have installed with the -V option,
which works for both perl rename and util-linux rename:

If you have the perl version installed, it will mention either perl or
File::Rename depending on how old your version is.

$ rename -V
/usr/bin/rename using File::Rename version 2.00, File::Rename::Options version 
1.99

With the util-linux version, it will mention util-linux:

$ rename -V
rename.ul from util-linux 2.38.1

WARNING: Again, these two programs are not at all compatible. Aside from -V,
you can't use perl rename options with util-linux rename or vice-versa.


(Debian systems often have both installed, with perl as /usr/bin/rename and
util-linux rename as /usr/bin/rename.ul. Other distros might have util-linux
as rename and perl rename as prename)

craig


Re: Life After CentOS

2022-12-21 Thread Craig Sanders via luv-main
On Sun, Dec 18, 2022 at 05:52:46PM +1000, Piers Rowan wrote:
> I'm sure we have all used a few distros in the past (like many!).
>
> So CentOS is going away where to next?

I forgot to mention this in my last post, but Rocky Linux is where a lot of
the Centos devs and users went.

https://rockylinux.org/

"Enterprise Linux, the community way.

 Rocky Linux is an open-source enterprise operating system designed to be
 100% bug-for-bug compatible with Red Hat Enterprise Linux®. It is under
 intensive development by the community."

So, basically the same idea/reason as Centos.



Personally, I've got no use for RHEL and thus never had any use for Centos - a
cheap/free version of something I don't use has no appeal.

I'd stick with - and recommend - Debian.

craig


Re: Life After CentOS

2022-12-18 Thread Craig Sanders via luv-main
On Sun, Dec 18, 2022 at 09:29:51PM +1100, Les Kitchen wrote:

> And if there's some package you really need a recent version of, sometimes
> you can just do a one-off install.

Or look in https://backports.debian.org/ - the latest version (or at least, a
very recent version) may have already been backported from sid.
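
Roughly (substitute your release's codename for bullseye): add the backports
line to your apt sources, update, then install with -t so apt knows you
actually want the backported version:

    # in /etc/apt/sources.list (or a file under sources.list.d/):
    deb http://deb.debian.org/debian bullseye-backports main

    apt update
    apt install -t bullseye-backports PACKAGENAME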

craig

--
craig sanders 


Re: ed! the one true editor! (was Re: sed on a large file)

2022-12-12 Thread Craig Sanders via luv-main
On Sun, Dec 11, 2022 at 10:04:58PM -0800, Rick Moen wrote:
> Quoting Craig Sanders (c...@taz.net.au):
>
> > still useful) ed or ex. ed is the original unix text line editor, dating
> > back to 1969 and still included with modern unix & linux systems.
>
> It also famously had one of the most ultra-terse man pages in all of Unix,
> thus leading to this humour piece (now arguably obsolete now that Debian
> provided a rather more fleshed-out man page):

yeah, that was the inspiration for my subject line.

craig


Re: ed! the one true editor! (was Re: sed on a large file)

2022-12-12 Thread Craig Sanders via luv-main
On Mon, Dec 12, 2022 at 05:21:29PM +1100, Les Kitchen wrote:
> I too will put in a good word for good old ed.  I wrote most of my Ph.D.
> thesis using ed through a dial-up acoustic modem, but that was a long time
> ago.

ed is awful for interactive use but very useful for scripted use (e.g. with
printf as in my previous message). ex is even better.

> Since then, I've used it for system rescue when no other editor was
> available.

It's better than nothing if that's all you've got. better than the even more
minimalist `cat > filename` too.

These days, though, disk space isn't much of an issue even for rescue disks or
an initramfs - it's rare not to have at least a minimal vi available (nvi or
elvis or similar). maybe even vim.

Or nano which (as much as I dislike it and other-editors-which-aren't-vi) is
better for interactive use than ed.


> And before ssh-keygen got the -R option, ed was the quickest and most
> convenient way to remove offending host lines from my ssh known_hosts file
> by something like:
>
> ed ~/.ssh/known_hosts
> 140d
> wq
>
> That is, delete line 140, write and quit.

or automate it with:

printf '%s\n' 140 d w q | ed ~/.ssh/known_hosts

craig


ed! the one true editor! (was Re: sed on a large file)

2022-12-11 Thread Craig Sanders via luv-main
On Mon, Dec 12, 2022 at 12:35:11PM +1100, Jason White wrote:
> How would Perl (e.g., with -pi -e options) perform compared with Sed?

Like sed, perl creates a new file too but does it in a slightly different
way. IIRC, sed writes to a new file then renames it over the original, while
perl renames the original and then creates a new file with the original name
then deletes the renamed original file.  Different method, same basic result.

In most cases, the fact that it's a new file makes no difference. You will at
some point in the edit have two versions of the file on the disk at the same
time. IMO this is mostly irrelevant - if you don't have enough free space for
this then you have a much bigger problem to solve.

More significantly, however, the new file will have a new inode number which
will break any hard links (but not symbolic links) to that file if any exist.
As hard links aren't used all that often, this probably isn't a big deal
either...but it is definitely something you need to be aware of when using
perl or sed's -i options (and similar options in other programs).
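
For the record, the two file-rewriting forms being compared look something
like this for the example used elsewhere in this thread (commenting out a line
matching "SET SQL" - the filename is just a placeholder):

    sed -i '/SET SQL/ s/^/-- /' file.sql
    perl -pi -e 's/^/-- / if /SET SQL/' file.sql

Both rewrite the whole file, so on a large file the time is usually dominated
by disk I/O rather than by which tool does the regex.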



If you want to do an actual in-place edit (with edits being performed on the
same file with the same inode), you could use something like the ancient (but
still useful) ed or ex. ed is the original unix text line editor, dating
back to 1969 and still included with modern unix & linux systems. ex is
included with all versions of vi that I know of, including vim.  They're both
line-oriented text editors without fancy new-fangled stuff like a curses
visual interface.  They're very similar - ex is like a greatly enhanced
superset of ed. ex can do a lot more than ed: everything that vi or vim can do
except the "visual" stuff that only makes sense in a visual editor like vi.

e.g.

printf '%s\n' '/SET SQL/' i '-- ' . j w q | ed -s ./file.sql

or

printf '%s\n' '/SET SQL/' i '-- ' . j w q | ex ./file.sql

Here printf is used to pipe a series of commands into ed or ex (with a newline
character between each command):

 '/SET SQL/'   - regex search for line matching "SET SQL"
 i - insert text before current line
 '-- ' - the text to insert, two dashes and a space
 . - tells ed that input text is finished
 j - join the two lines ("-- " and "SET SQL...")
 w - write (i.e. save the edited file)
 q - quit

NOTE: ed/ex commands and input text which contain spaces, tabs, shell
metacharacters, etc will need to be quoted as usual.


Alternatively to just delete the line:

printf '%s\n' '/SET SQL/' d w q | ed -s ./file.sql

or

printf '%s\n' '/SET SQL/' d w q | ex ./file.sql


craig


PS: These programs are all related and share a common history. And they
all build upon a common set of basic editing commands. ex was an advanced
re-implementation of ed and vi was originally a visual mode for ex. and vim
is an enhanced re-implementation of vi.  Also, it's worth remembering that ed
is a **file** editor (as are ex and vi and vim), while sed was a **stream**
editor (nowadays, the -i option allows it to edit files too) - it reads its
stdin and writes to stdout.

The above is a gross simplification.  For a better and more detailed
simplification of the history, see:

https://en.wikipedia.org/wiki/Ed_(text_editor)
https://en.wikipedia.org/wiki/Sed
https://en.wikipedia.org/wiki/Ex_(text_editor)
https://en.wikipedia.org/wiki/Vi
https://en.wikipedia.org/wiki/Vim_(text_editor)




Re: LUV needs volunteers

2022-07-02 Thread Craig Sanders via luv-main
On Sat, Jul 02, 2022 at 09:10:28PM +1000, russ...@coker.com.au wrote:

RE: reverting to mailman 2:

> It probably can, but it's a pain and no-one is volunteering to do it.
>
> https://packages.debian.org/search?keywords=mailman

> But if you did it then you would have to support it, Mailman version
> 2.x is no longer in Debian.  At this time someone might suggest using a
> distribution other than Debian which still supports Mailman 2.x, but that
> makes supporting all the other stuff more difficult or requires a second VM.

I'm not volunteering to do the conversion (if the current lists work, why fix
them?), but...

The hard part would be converting the mailing lists, including subscriber
lists and archives back to mailman 2.x.  Mailman stores most of its config
data and subscriber lists in python's pickle format (a binary serialised
object dump - python devs think this kind of shit is a good idea*. Using a
fragile and non-human-readable/editable binary format can save nanoseconds,
perhaps even microseconds, compared to parsing plain text), and I expect the
objects and data structures have changed between the two versions.

* It can be a good idea in some circumstances, like quickly saving state for
a long-running process or saving an object into a key-value store or database
for convenient re-use by other threads on other machines. Internal-use-only
data.  Stuff that no human (or program written in another language) is ever
likely to need to interact with directly. Not for storing easily read config
data & subscriber lists.



Installing and running mailman 2.x is probably the easy part.  It still
runs fine on my debian sid system - I used apt-mark to hold it before
the transition to mailman3.  I even have a copy of the mailman 2.1.29-1
.deb file if anyone wants it, but you could probably get it easier from
archives.debian.org

You'd have to pick up a few obsolete dependencies from the archives, anyway -
like python (which is a dependency-only package that depends on python2.7 and
related packages) and python-dnspython. python2.7 is still in debian.

It wouldn't be too hard to re-package mailman and python-dnspython to depend
on "python2.7" or "python2" instead of "python". just update the control file
and rebuild.
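
The quick-and-dirty way to do that is roughly (untested, and the version
string is just whatever you grabbed from the archive):

    dpkg-deb -R mailman_2.1.29-1+b1_amd64.deb tmpdir
    $EDITOR tmpdir/DEBIAN/control     # change the python dependency to python2.7
    dpkg-deb -b tmpdir mailman_2.1.29-1+b1+local1_amd64.deb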

$ apt-cache show mailman
Package: mailman
Status: hold ok installed
Priority: optional
Section: mail
Installed-Size: 38226
Maintainer: Mailman for Debian 
Architecture: amd64
Source: mailman (1:2.1.29-1)
Version: 1:2.1.29-1+b1
Depends: libc6 (>= 2.14), python:any, debconf (>= 0.5) | debconf-2.0, 
logrotate, ucf, lsb-base, apache2 | httpd, cron | cron-daemon, python-dnspython
Pre-Depends: debconf | debconf-2.0
Recommends: default-mta | mail-transport-agent
Suggests: mailman3-full, spamassassin, lynx, listadmin
Conffiles:
 /etc/cron.d/mailman 52d6ae0d505899f9ea4bb416bcbcbaf5
 /etc/logrotate.d/mailman 3e74d5981b732c80bda238f4076e44aa
 /etc/mailman/apache.conf b95d3687fc49957b896e2f27bb5ee165
 /etc/init.d/mailman 0ac814ea3e2cf0952fa5fd3bfa2c9484 obsolete
Description: Web-based mailing list manager (legacy branch)
 The GNU Mailing List Manager, which manages email discussion lists.
 Mailman gives each mailing list a web page, and allows users to
 subscribe, unsubscribe, etc. over the web. The list manager can
 administer his or her list entirely from the web.
 .
 Mailman also integrates most things people want to do with mailing
 lists, including archiving, mail <-> news gateways, and so on. It
 has all of the features you expect from such a product, plus
 integrated support for the web (including web based archiving),
 automated bounce handling and integrated spam prevention.
 .
 Note that this package contains the legacy (2.x) branch of Mailman.
 All new development happens in the Mailman 3 suite, available in
 Debian via the mailman3 metapackage.
Description-md5: c0704b09355713376b05bc252b17bf2c
Homepage: http://www.list.org/



$ apt-cache show python-dnspython
Package: python-dnspython
Status: install ok installed
Priority: optional
Section: python
Installed-Size: 618
Maintainer: Debian Python Modules Team 

Architecture: all
Source: dnspython
Version: 1.16.0-1
Depends: python:any (<< 2.8), python:any (>= 2.7~)
Description: DNS toolkit for Python
 dnspython is a DNS toolkit for Python. It supports almost all record types. It
 can be used for queries, zone transfers, and dynamic updates. It supports TSIG
 authenticated messages and EDNS0.
 .
 dnspython provides both high and low level access to DNS. The high level
 classes perform queries for data of a given name, type, and class, and return
 an answer set. The low level classes allow direct manipulation of DNS zones,
 messages, names, and records.
Description-md5: 5c7ff37ec0055b47241c383f2c0e3c03
Homepage: http://www.dnspython.org

$ apt-cache show python
Package: python
Status: install ok installed
Priority: standard
Section: python
Installed-Size: 68
Maintainer: Matthias Klose 
Architecture: amd64
Multi-Arch: allowed
Source: python-defaults

Re: webcams

2021-11-04 Thread Craig Sanders via luv-main
On Fri, Nov 05, 2021 at 12:20:26AM +1100, Irving Tjiptowarsono wrote:
> I'm considering turning an old phone into a webcam, or using this as an
> excuse to grab the raspberry pi high quality camera module (already have
> most of the other bits)...

Dunno what kind of rpi you have, but I watched this a few weeks ago...might be
useful for you:

"Raspberry Pi Zero is a PRO HQ webcam for less than $100!" - Jeff Geerling

https://www.youtube.com/watch?v=8fcbP7lEdzY

craig

--
craig sanders 


Re: PCIe lanes

2021-05-31 Thread Craig Sanders via luv-main
On Sun, May 30, 2021 at 02:10:35PM +1000, russ...@coker.com.au wrote:
> This blog post describes PCIe bifurcation and how PCI lanes are allocated.
> I never properly understood PCIe before reading this.

Kind of related:

"Record Breaker: Toward 20 million i/ops on the desktop with Threadripper Pro"

https://www.youtube.com/watch?v=RfrMnVpPuVw

(lots of pci-e x4 nvme drives on linux)

craig


Oh My Git!

2021-03-08 Thread Craig Sanders via luv-main
I just saw a mention of this on Boing Boing and thought it might be of
interest here, especially for those who want to learn more about git.

It's an open source educational game that teaches how to use git (and,
yes, it uses a real git repo to do so).

https://ohmygit.org/

https://blinry.itch.io/oh-my-git


Full source at: https://github.com/git-learning-game/oh-my-git

OMG! is built with the open-source game engine Godot:
https://godotengine.org/

craig

--
craig sanders 


Re: storage error

2020-12-15 Thread Craig Sanders via luv-main
On Tue, Dec 15, 2020 at 06:48:32PM +1100, Russell Coker wrote:
> How likely is the following error (which happens periodically) to be on the
> M.  2 SATA device and how likely is it to be on the motherboard?

My guess would be that it's most likely the M.2 SATA device...because, in my
experience, drives suck and die a lot - which is why i'll never use less than
RAID-1 (or equivalent, such as mirrored drives in ZFS).

OTOH, while I've had LOTS of mechanical hard drives die on me over the years,
I've only ever had one SSD die (and even that died "gracefully" - could
still be read, but writes failed).  SSDs are, IME, a lot more reliable than
spinning rust.


But why rely on a guess when the obvious thing to do is to test it?

1. Try the M.2 device in another machine

If you don't have another motherboard with M.2 slots free, you can get
reasonably priced PCI-e adaptors that can take anywhere from 1 M.2 drive
(using 4 PCI-e lanes) to 4 M.2 drives (using all 16 PCI-e lanes).

These are a useful thing to have around, so it wouldn't be a one-use waste of
money.


2. Try another M.2 device in the motherboard.

The cheapest M.2 drive available now is around $30 for 120GB.  e.g.

$ cplgrep -p m.2 | head -n1
32  Kingston SA400M8/120G A400 M.2 SSD 120GB

You are, IMO, better off just buying another M.2 the same size or larger (if
it turns out to be the drive that's failing, you can immediately use it as
a replacement.  Otherwise, you've got a spare, or a drive to use in another
machine).
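
One other cheap check worth doing before (or while) swapping hardware around:
look at the drive's own SMART attributes and error log with smartctl from the
smartmontools package (the device name is a guess - check yours with lsblk):

    smartctl -a /dev/sda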



BTW, if your motherboard supports it, get M.2 NVME rather than M.2 SATA -
there's very little difference in price, and the NVME will be around 4 to 6
times faster - depending on brand and model, from ~2500 MB/s up to ~3500 MB/s
for PCI-e 3.0 NVME vs ~550 MB/s for SATA.

For PCI-e 4.0 NVME, it could theoretically get up to nearly 8 GB/s (less
protocol overhead), but current models max out around 5.5 or 6 GB/s.

PCI-e 5.0 will double that again in a year or three if SSD speeds keep up with
PCI-e bus speeds.

craig

--
craig sanders 



Re: trouble with debian mirrors

2020-11-06 Thread Craig Sanders via luv-main
On Sat, Nov 07, 2020 at 03:58:51PM +1100, b...@fastmail.fm wrote:
> [root@owl /etc/apt]# apt-get install inxi
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> The following additional packages will be installed:
>   hddtemp libglew2.1 lm-sensors mesa-utils tree
> Suggested packages:
>   libcpanel-json-xs-perl | libjson-xs-perl libxml-dumper-perl glew-utils 
> fancontrol read-edid i2c-tools
> The following NEW packages will be installed:
>   hddtemp inxi libglew2.1 lm-sensors mesa-utils tree
> 0 upgraded, 6 newly installed, 0 to remove and 0 not upgraded.
> Need to get 731 kB of archives.
> After this operation, 2,625 kB of additional disk space will be used.
> Do you want to continue? [Y/n] y
> Err:1 http://ftp.au.debian.org/debian bullseye/main amd64 hddtemp amd64 
> 0.3-beta15-53
>   Connection failed [IP: 2001:388:1034:2900::25 80]
   ^^

OK, that's clearly using the IPv6 address for ftp.au.debian.org

Does your ISP support IPv6?  If not, then force apt to use ipv4 as i mentioned
in my last message.



BTW, if you don't know how to recognise an ipv4 or ipv6 address, it's pretty 
easy.

ipv4 addresses are short, usually printed in decimal, with **exactly** four
groups of 8-bit decimal numbers (i.e. 0 to 255), separated by periods ".".
e.g. 192.168.1.1, sometimes with an optional netmask suffix like /24 or /32.

ipv6 addresses are longer, usually printed in multiple groups of
four hexadecimal digits (i.e. 16 bits worth), separated by colons.
e.g. 2001:388:1034:2900::25

see:

https://en.wikipedia.org/wiki/IPv4#Addressing
https://en.wikipedia.org/wiki/IPv6_address

While it's useful to know the basics of IP networking, you don't actually
need to understand all the details.  For practical purposes, you really only
need to know if your ISP supports ipv6 routing or not.  If not, then disable
it (at least on your uplink interface).  Most ISPs still do not support
ipv6. This is because they're slack-arse lazy bastards who find the shortage
of ipv4 address space convenient for limiting what their customers do on the
internet.



> Using host au debian is not found:
>
> [ben@owl ~]$ host ftp.au.debian.org/
> Host ftp.au.debian.org/ not found: 3(NXDOMAIN)

that's not a domain name. that's just a string ending in /

host doesn't know what to do with a string ending in a /

> [ben@owl ~]$ host http://ftp.au.debian.org
> Host http://ftp.au.debian.org not found: 3(NXDOMAIN)

that's not a domain name either.  That's a URL.

host doesn't know what to do with a URL either.

> The nz debian is found.
> [ben@owl ~]$ host ftp.nz.debian.org

that's a domain name. host knows what to do with one of those. which is why it
worked:

> ftp.nz.debian.org is an alias for mirror.fsmg.org.nz.
> mirror.fsmg.org.nz has address 163.7.134.112
> mirror.fsmg.org.nz has IPv6 address 2404:138:4000::


> When I changed sources.list to ftp.nz.debian.org, and tried installing inxi, 
> the error was:
>
> [root@owl /etc/apt]# apt-get install inxi
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> E: Unable to locate package inxi

that's because you didn't update the packages lists after changing the repo.

when you change sources.list, you have to run "apt-get update" (or "apt
update", etc)


> Testing with the Force config. I used the cowsay program as the test:
> [...it worked...]

not surprising.  the problem was, as I suspected, that your machine was
using the mirror's IPv6 IP address rather than IPv4.

> I purged cowsay and then tried the nz mirror but it failed:
>
> [root@owl /home/ben/Downloads]# apt-get -o Acquire::ForceIPv4=true install 
> cowsay
> Reading package lists... Done
> Building dependency tree
> Reading state information... Done
> E: Unable to locate package cowsay

again, this is because you didn't run "apt-get update" after changing 
sources.list

> Whilst going through the process that was suggested and running
> the commands above, I returned to Firefox, and it now brings up
> http://ftp.au.debian.org/. The only extra thing that I have done which is
> not described above is to return the /etc/hosts file that I use to its
> original from the Pollock hosts file.
>
> ---start /etc/hosts---
> 127.0.0.1   localhost
> 127.0.1.1   owl
>
> # The following lines are desirable for IPv6 capable hosts
> ::1 localhost ip6-localhost ip6-loopback
> ff02::1 ip6-allnodes
> ff02::2 ip6-allrouters
> ---end /etc/hosts-


that hosts file looks OK.  And isn't relevant here because it doesn't have 
entries for either
ftp.au.debian.org or ftp.nz.debian.org.

Dunno why you're calling it a "Pollock" hosts file.

> I have now written the /etc/apt//apt.conf.d/zzz-ipv4.conf with the ipv4
> Force config as suggested.
>
> The nz mirror still fails. I would really like to be able to change mirrors
> if I need to in the future.

Just remember to run 'apt-get update' whenever you change sources.list.

Re: trouble with debian mirrors

2020-11-06 Thread Craig Sanders via luv-main
On Sat, Nov 07, 2020 at 12:01:54PM +1100, b...@fastmail.fm wrote:
> In my /etc/apt/sources.list I have been using the following mirror:
> deb http://ftp.au.debian.org/debian/ bullseye main non-free contrib
>
> The last few days it's returning error messages such as the following:
>
> E: Failed to fetch 
> http://ftp.au.debian.org/debian/pool/main/t/tree/tree_1.8.0-1+b1_amd64.deb  
> Unable to connect to ftp.au.debian.org:http:
> E: Unable to fetch some archives, maybe run apt-get update or try with 
> --fix-missing?
>
> Running the suggested apt-get update also fails, and with -f does too of 
> course.

Can you show the full output of "apt-get update"?  or "apt install tree"?
Without details, the best anyone can do is guess... so here are some diagnostic
tests for you to try, and a guess:


Can you connect to either http://ftp.au.debian.org/ or
http://ftp.nz.debian.org/ with a web browser?  I had no problems connecting to
either site with firefox, with both sites being very fast and responsive.

what happens if you try to traceroute to either site?



I'm suspecting there's something weird going on with your DNS resolution - or
perhaps your system is configured to use IPv6 in preference to IP4v addresses
or something (both sites have both IPv4 A records and IPv6  records).

$ host ftp.au.debian.org
ftp.au.debian.org is an alias for mirror.linux.org.au.
mirror.linux.org.au has address 150.203.164.37
mirror.linux.org.au has IPv6 address 2001:388:1034:2900::25

$ host ftp.nz.debian.org
ftp.nz.debian.org is an alias for mirror.fsmg.org.nz.
mirror.fsmg.org.nz has address 163.7.134.112
mirror.fsmg.org.nz has IPv6 address 2404:138:4000::

You can test this with:

apt-get -o Acquire::ForceIPv4=true update
apt-get -o Acquire::ForceIPv4=true install tree

If they work, make it permanent by adding the following to /etc/apt/apt.conf
(or to a file in the directory /etc/apt/apt.conf.d/, give it a name like
zzz-ipv4.conf to make sure the filename comes last in the sort order)

Acquire::ForceIPv4 "true";

and make sure there isn't any line with 'Acquire::ForceIPv6 "true";' in
/etc/apt/apt.conf or any files in /etc/apt/apt.conf.d/

grep -ir ForceIPv6 /etc/apt/apt.conf /etc/apt/apt.conf.d/


> I'm writing to ask if the problem I'm having is common and there's a problem
> like congestion with the au mirror,

Nope, not common.  It's almost certainly some weird problem on your end,
either a misconfiguration or something weird your ISP is doing.

> and also how to change mirrors in the sources.list so that I can install and
> update.

the way you changed it should work.

for more details on the sources.list file format, run "man sources.list".

craig

--
craig sanders 


Re: Old OpenLDAP server and ProFTPD - "bad password" problem

2020-10-24 Thread Craig Sanders via luv-main
On Thu, Oct 22, 2020 at 11:00:10AM +1100, Peter Ross wrote:

> ftptest:ACJJox72N4DZQ:14740::9:7:::

that's a really ancient looking crypted password, probably hasn't been changed
in decades.

It doesn't even start with "$1$", so it was created before even MD5 hashing
became standard (let alone $5$ for sha256 or $6$ for sha512), and I can't even
remember when that happened, late 1990s perhaps.

Centos 5 was released in 2007, so a) I guess this server was migrated from
something even older and b) should be capable of more modern passwd hashing.


> Do you have any ideas what I can do to get ProFTPd working with it?

Two ideas:

1. change the password to itself (might require changing it to something else,
then back to whatever it was originally) so that it gets hashed again.

this should be easy to test with minimal work required. who knows, it may even
work :-)

remember to test that any old clients still work with the newly hashed passwd, 
as
well as the new proftpd server.

changing the password to itself **should** be harmless, but you'll probably
want to make a backup of the ldap config & auth data, just in case you have to
revert.
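
A sketch of what that might look like with the standard OpenLDAP client tools
- all the DNs here are made up, substitute your own, and -W/-S will prompt for
the admin bind password and the new password respectively:

    ldappasswd -x -H ldap://ldapserver -D "cn=admin,dc=example,dc=com" -W \
        -S "uid=ftptest,ou=people,dc=example,dc=com"

This should re-hash the password with whatever scheme the server is configured
to use, which is the point of the exercise.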



2. set up a VM with either a modern ldap proxy or a clone ldap server using
a modern version of the ldap daemon.

If the new VM works with both old and new clients, then congrats - you're well
on the way to migrating your ldap infrastructure to something modern.

craig

--
craig sanders 


Re: Firefox menus disappear

2020-09-16 Thread Craig Sanders via luv-main
On Sun, Sep 06, 2020 at 07:29:29PM +1000, Brian May wrote:
> I realised that a computer where this just works has the latest Firefox
> which I downloaded from the Firefox website (now 80.0.1). Instead of the ESR
> version supplied with Debian (68.11.0esr).

68.11.0esr is what's in testing, right?

sid currently has 78.2.0esr

maybe try that?

or switch to firefox rather than firefox-esr.  I think testing currently
has 79.0. sid has 80.0.1


i used firefox-esr for a while until I had new versions of my plugins (or, at
least, reasonable replacements for most of them), but i don't really see any
point to firefox-esr now. It's moved past the version where xul plugins still
worked, so it's just firefox but a few versions behind.

craig

ps: e - upstream binaries. that's unhygienic.

--
craig sanders 


Re: copy and paste, under Wayland, and to and from an XTerm, was relating to dl audio and video

2020-08-30 Thread Craig Sanders via luv-main
On Mon, Aug 31, 2020 at 01:22:32AM +1000, Russell Coker wrote:

> > curl URL | sudo bash
>
> If you trust someone who's written the daemon then trusting their install
> script is no more risk.

part of my point is that anyone who suggests using brawndo-installer to install
their code can not be trusted.  they clearly fail the competence test.


> > > Now you can modify all scripts to use runuser, but that's going to be a
> > > pain. systemd does all this for you.
> >
> > which is nice if you never need to do anything in a way that the systemd
> > devs didn't think or use themselves. or deliberately disallow. in that
> > situation, you're fucked.
>
> Which situations are you thinking of?

being able to manually change the exact order that particular daemons/services
are loaded.

> The "kill processes in user session when they logout" was a good feature
> for some use cases and it was annoying that last time I checked the Debian
> systemd maintainers had disabled it instead of leaving it as a configuration
> option.

actually it was a terrible anti-feature. one that has little reason to
exist, and certainly should never be enabled by default. are the systemd devs
completely ignorant of decades of normal *nix usage? they certainly act like
it sometimes.

anyone who needs or wants that kill-all-my-processes-when-i-logout
functionality can do it in their .bash_logout (or equivalent for whatever
shell they run).

for everyone else, it's an extremely unpleasant surprise to find that the
program they deliberately left running with nohup or screen or tmux or
whatever was killed when they logged out. nobody expects their processes to be
killed just because they logout. that's not how things are supposed to work.

> > > Also systemd allows blocking daemons from accessing /home and other
> > > security features.
> >
> > no, the kernel does that. systemd just makes use of kernel namespace
> > features. it's not alone in being able to do that - containerd and docker
> > and firejail and many other programs make use of namespaces.
>
> Yes, but systemd has a configuration option for it.

so do other programs that make use of namespaces.

systemd devs like to pretend that their baby is the only program capable of
doing certain things, when that's not even remotely true.

half the things they claim as wonderful features exclusive to systemd are
re-implementations of ideas from other programs. and most of the rest are
things that an init system has no business doing anyway.


> > I really don't get why systemd fans are so terrified of shell scripts.
>
> Because I've worked on so many shell scripts written by other people.

So have I. I've also had to fix several broken unit files. It's possible to
create shit with any tool.

The reason i mention it is because "shell scripts are bad" is a common reason
brought up to prove that systemd is great and why it's better than any other
init. to me, cretinous statements like that say a lot more about the person
saying it than about either systemd or shell scripts. or sysvinit. etc.

Also, IME, shell scripts are easier to understand and fix when required.
They're easier to modify, easier to temporarily enable/disable parts of them
when testing stuff, easier to add diagnostic code, and more. lots of things
are easier with shell scripts.

They're not scary at all. sh scripting is a useful tool. systemd devs are
doing everyone a great dis-service by teaching them that they're scary and
difficult.

> > especially when systemd is so poorly documented. it's not as if systemd
> > unit files are paragons of clarity, or that there's any easy way to get an
> > overview of the dependency relationship between units.
>
> "systemd-analyze critical-chain" is very handy.

that tells you what happened during the current boot session. it doesn't tell
you what WILL happen during the next reboot.

AFAIK there's no way of listing which unit files will be loaded, or in what
order. e.g. definitive answers to questions like "will docker load before or
after zfs.mount?", not just "maybe. reboot and find out".

you can get that just by doing 'ls -l' of an /etc/rc[0-6].d/ directory.  The
Snn and Knn symlinks tell you the *exact* order that the scripts will run at
boot or shutdown time, and you can override the default with insserv and the
script's INIT INFO comments.
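
e.g. (the rc2.d names below are just illustrative; the closest systemd gets,
AFAIK, is the static dependency view from list-dependencies, which still isn't
a promise about what order things will actually start in on the next boot):

$ ls /etc/rc2.d/
S01rsyslog  S02ssh  S03cron  S04docker     # Snn == exact start order

# nearest systemd equivalents (static view only):
systemctl list-dependencies --after docker.service
systemctl list-dependencies --before zfs.mount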



BTW, i used that example (of zfs.mount and docker) for a reason. My main
server runs docker configured to use zfs. Every reboot it's a gamble whether
docker will start before or after zfs.mount. If it starts after, it works
fine. That's what's supposed to happen.  If it starts before zfs.mount, it
breaks: it creates and populates a fresh new /var/lib/docker directory,
which prevents my zfs docker dataset from being mounted (I have to manually
stop docker, and delete the directory before I can zfs mount /var/lib/docker
and restart docker).
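
fwiw, a drop-in like the following *should* pin the ordering (a sketch, not
something i've verified on that box - assumes the unit really is called
docker.service):

mkdir -p /etc/systemd/system/docker.service.d
cat > /etc/systemd/system/docker.service.d/wait-for-zfs.conf <<'EOF'
[Unit]
Requires=zfs.mount
After=zfs.mount
EOF
systemctl daemon-reload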

this kind of shit is one of the reasons i'm reluctant to reboot my main server
if i can possibly avoid it.

Re: copy and paste, under Wayland, and to and from an XTerm, was relating to dl audio and video

2020-08-30 Thread Craig Sanders via luv-main
On Sun, Aug 30, 2020 at 09:25:08PM +1000, Russell Coker wrote:
> One feature of systemd is fast boot, unless it has an error and gets you 90
> second timeouts etc.

It can be (and often is) worse than just 90 seconds. I've had both bootups and
shutdowns delayed by 10s of minutes or longer while it retries and retries for
progressively longer times. it's really fucking annoying. and you can't even
hit Ctrl-C or something to force it to abort.

A few times i've let systemd try to shutdown or reboot for HOURS, just out
of curiosity to see if it will ever just give up on whatever's failing, and
reboot anyway.  AFAIK it would have gone on retrying forever if i hadn't
power-cycled it.


also, i don't reboot very often (on average, every two months or so. or even
less often - e.g. while my desktop machines got kernel upgrades about a week
ago to 5.7.0-2-amd64 from debian sid, my main server/gateway box was last
rebooted 201 days ago and is still running 5.3.0-3-amd64). saving a few
seconds every year is hardly worth writing home about, and is more than offset
by how long it takes to shutdown or reboot (IME that's *always* slower under
systemd, usually by several minutes)


> What I like are the better security features.  One example is terminal
> control.  If you have an init script that launches a daemon under a
> different UID then if that daemon isn't started by runuser or something
> similar then the daemon can push characters into the keyboard buffer of the
> shell.

and if you're running a daemon or something that does that, then you've got
greater problems to worry about - anything that could install a compromised
daemon already has root access and doesn't need access to your keyboard
buffer.

i'm not saying that this kind of separation is worthless, just that it's not
enough by itself.

To start with, don't install packages from untrusted repositories and NEVER
use brawndo-installer, which is a favourite of idiot devs (usually python or
ruby or nodejs devs, but not exclusively) and looks something like this:

curl URL | sudo bash

or as a function, highlighting all of its "features":

brawndo() {
   curl $1 | sudo /usr/bin/env bash
}
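
the boring-but-safer alternative is to download first, read it, then decide
(a sketch; $url is whatever install script they're pushing):

curl -fsSL "$url" -o install.sh
less install.sh        # actually read what it's going to do
sha256sum install.sh   # compare against a published checksum, if there is one
sudo bash install.sh   # only if you're still happy after reading it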

> ps axo pid,sess,comm|grep $$
>
> The above command shows processes in the same session group as your shell.

or you could use namespaces and lsns or pgrep --ns
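
e.g. something like:

lsns -t pid            # list pid namespaces and who owns them
pgrep --ns $$ -l       # processes sharing this shell's namespaces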

> Now you can modify all scripts to use runuser, but that's going to be a pain.
> systemd does all this for you.

which is nice if you never need to do anything in a way that the systemd devs
didn't think or use themselves. or deliberately disallow. in that situation,
you're fucked.


> Also systemd allows blocking daemons from accessing /home and other security
> features.

no, the kernel does that. systemd just makes use of kernel namespace features.
it's not alone in being able to do that - containerd and docker and firejail
and many other programs make use of namespaces.

> When it does those sorts of things they are in a standard format so you can
> grep service files to make sure that all daemons meet the security goals
> that you have instead of having to read lots of shell scripts written by
> different people for such things.

I really don't get why systemd fans are so terrified of shell scripts.

especially when systemd is so poorly documented. it's not as if systemd unit
files are paragons of clarity, or that there's any easy way to get an overview
of the dependency relationship between units.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: copy and paste, under Wayland, and to and from an XTerm, was relating to dl audio and video

2020-08-30 Thread Craig Sanders via luv-main
On Sun, Aug 30, 2020 at 05:24:39PM +1000, Mark Trickett wrote:
> I did not choose Wayland, nor systemd, but that is now the Debian
> defaults. There are good reasons behind the changes, or at least I have seen
> some support that I will concur with on why Wayland over xwindows. However I
> do not find benefit in systemd,

most of my machines run systemd now. all but one, which just fails to boot at
all if i try to boot it with systemd rather than sysvinit. I'm not entirely
happy with that but i've given up fighting against it, i just don't have the
energy for that.

I don't actually mind systemd as an init system, i.e. starting and stopping
services - it's pretty good at that. it's all the other stuff (like ntp and
dns and cron) that it tries to do that pisses me off, especially since it does
such a shitty job of them.

I just disable as much of that systemd borg shit as I can and run the required
services as extra daemons, like ntpd for ntp and unbound for dns resolution
(I still use bind9 on a different IP address for serving my zones) and vixie
cron.

> the current install (Debian 10.5) is missing a piece of firmware, but I
> cannot read the message in time during boot, nor find it in the logs. I
> think it is for the network on the motherboard.

try something like 'journalctl -xe -b0', or 'dmesg | less'.  You may be able
to find the message you want by searching through that.
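
e.g. (the grep pattern is just a guess at what the missing-firmware warning
might contain):

journalctl -b 0 -k | grep -iE 'firmware|microcode'
dmesg | grep -i firmware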

> From my reading of pages found by a google search, there is a choice by the
> development team based on (in)security of casual copy and paste. I thought
> that it was likely a configuration issue, but cannot find. I tried a number
> of teminals, but not a lot, to find one that appears to be reasonable. I
> still need to do more research, when life leaves the time from the real
> world.

that would be kind of fucked up if they've disabled copy-paste into terminals
or text editors entirely. so fucked up that i can't believe anyone would
do that deliberately...there has to be some way to do it, or some way to
re-enable it.

off-by-default is merely annoying, but no big deal. off-forever would be
inexcusable.


> I cannot make sense of a line of perl at this time,

perl's really not that hard. if you have some basic scripting capability
(or an ability to understand algorithms and flow control), and some passing
familiarity with shell scripts or sed or awk, then perl is easy.

it's nowhere near as difficult as some people like to make out.  and learning
some perl is at least as useful as learning sh and sed and awk put together.

just ignore all the dickheads who say bullshit like "perl is write-only" or
that it "looks like line-noise". they have no idea what they're talking about
- at best, they're just scared of the sight of regexps as operators - beware
the wild regexp, cage it in a function call or it'll get you.


> but there are some folks I will trust, such as you and Russel Coker. I do
> not expect you to be perfect, but that you do know more than I, and from
> what I have seen, not malicious. I do try to comprehend even your examples
> first, but have to trust that you do know that much more in the subject of
> concern.

oh. I didn't mean me. you can trust anything i say, i'm perfect and
never make a mistake.

btw, i have a nice bridge i'd like to sell you cheap.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: copy and paste, under Wayland, and to and from an XTerm, was relating to dl audio and video

2020-08-29 Thread Craig Sanders via luv-main
On Tue, Aug 18, 2020 at 04:54:08PM +1000, Mark Trickett wrote:
> Many thanks for your excellent posts, I am learning more. However I have
> Debian 10, nominally up to date, and it has Wayland with Gnome as the
> desktop. I am finding it very frustrating that I cannot copy and paste to
> and from the XTerm window.

Sorry, i don't use Wayland, have no idea what could be going wrong with this.

I can't see the point of Wayland.  TBH, it seems like the systemd of X - a
half-arsed crappy partial implementation of only the stuff that the devs
personally use because there's no way that anyone else could ever need
anything they don't use.

Also, CADT syndrome: never fix anything. toss out the old garbage, make way
for the shiny new garbage. Fixing bugs is boring. Reimplementing from scratch
every year or two is fun and exciting and it'll be perfect. For sure, this
time.

> I used to be able to do with earlier terminal emulation under the XWindows
> system. I used it to be able to copy text from a terminal into an email, and
> commands back from email, ensuring that I did not make typos.

That's weird. i'd be surprised if Wayland was actually incapable of doing
something as basic as copy and paste between terminal windows, so it's
probably a bug or a configuration error.

Maybe try a different terminal instead of xterm.  There are dozens to
choose from. I mostly use roxterm (full-height apart from the space used by
xfce4-panel, full-width, approx 250x60 depending on font size - great for
viewing log files), but sometimes I use xfce4-terminal if i want a tall,
narrow window (80 or 132 x 60) to fit beside something else.


> I do understand that there can be security issues if used without a measure
> of care and thoughtful, but it also has much merit when coping with some of
> the regular expressions that come up as examples in email and on web pages.

the "security issues" comes from blindly executing code/commands that you
don't understand.

treat everything as just an example that needs further research. never execute
something posted by someone else(*) unless you know what it does and how and 
why.


(*) ANYONE else. even if they're trustworthy and not malicious, they could be
wrong, they might have made a mistake.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: [MLUG] Advice needed about ZFS

2020-08-29 Thread Craig Sanders via luv-main
On Wed, Aug 26, 2020 at 10:43:30AM +, stripes theotoky wrote:
> > I would suggest looking at something like a Network Attached Storage
> > device, with multiple drives in a suitable RAID array.
>
> This is the ultimate plan to build a NAS from an HP Microserver. I am
> leaning towards Nas4Free on an SSD or internal USB and 3, 6TB mirrors. This
> is a project that has to wait because right now due to Covid19 and
> Brexit we are not sure where we are.  I am here and can't leave but
> expecting to be out of work (which won't stop my research), my husband is
> British/Australian, resident in Austria to avoid Brexit but is stranded by
> Covid in Greece. When it all settles down and we have a home again building
> this NAS is going to be pretty high on the list of things to do.

In the meantime, you can use a largish (>= 4 or 6 TB) external USB drive set
up to be a ZFS pool for backups.

Then 'zfs send' your snapshots to the USB drive, and keep a multi-year
snapshot history on them.  Aggressively expire the snapshots in your laptop to
minimise the amount of space they're taking.
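
e.g. a rough sketch (device path, pool and snapshot names here are all
placeholders - use the /dev/disk/by-id/ path of your actual USB drive):

# one-off: create the backup pool on the external drive
zpool create backup1 /dev/disk/by-id/usb-Vendor_Model_SERIAL-0:0

# aggressively expire an old auto-snapshot on the laptop pool
zfs destroy source@zfs-autosnap-20190829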

You can have multiple USB backup drives like this - each one has to be
initialised with a full backup, but can then be incrementally updated with
newer snapshots.  Each backup pool should have a different name - like
backup1, backup2, etc.

You can automate much of this with some good scripting, but your scripts
will need to query the backup destination pool (with 'zfs list') to find out
what the latest backup snapshot on it is.  Incremental 'zfs send' updates
send the difference between two snapshots, so you need to know what the
latest snapshot on the backup pool is AND that snapshot has to still exist
on the source pool.
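
e.g. a quick way to get that from a backup pool called "backup" (a sketch,
assuming the @backup- naming scheme described below):

zfs list -H -t snapshot -o name -s creation -r backup |
    grep '@backup-' | tail -n 1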

You should use a different snapshot naming scheme for the backup snapshots.
If your main snapshots are "@zfs-autosnap-YYYYMMDD" or whatever, then use
"@backup-YYYYMMDD".  Create that snapshot, and use it for a full zfs send,
then create new "@backup-YYYYMMDD" snapshots just before each incremental
send.

e.g. the initial full backup on a pool called "source" to a pool called
"backup", if you had done it yesterday:

zfs snapshot source@backup-20200829
zfs send -v -R source@backup-20200829 | /sbin/zfs receive -v -d -F backup

and to do an incremental backup of *everything* (including all snapshots
created manually or by zfs-autosnap) from @backup-20200829 to today between
the same pools:

# source@backup-20200829 already exists from the last backup, no need to create it.
zfs snapshot source@backup-20200830
zfs send -R -i source@backup-20200829 source@backup-20200830 | zfs receive -v -u -d backup

** NOTE: @backup-20200829 has to exist on both the source & backup pools **

Unless you need to make multiple backups to different pools, you can delete
the source@backup-20200829 snapshot at this point because the next backup will
be from source@backup-20200830 to some future @backup-YYYYMMDD snapshot.


BTW, you don't have to backup to the top level of the backup pool. e.g. to
backup to a dataset called "mylaptop" on pool backup:

zfs create backup/mylaptop
zfs snapshot source@backup-20200829
zfs send -R -i source@backup-20200829 source@backup-20200830 | zfs receive -v -u -d backup/mylaptop

(you'd do this if you wanted to backup multiple machines to the same backup
drive. or if you wanted to use it for backups AND for storage of other stuff
like images or videos or audio files).



and, oh yeah, get used to using the '-n' aka '--dry-run' and '-v'/'--verbose'
options with both 'zfs send' and 'zfs receive' until you understand how they
work and are sure they're going to do what you want.


NOTE: as a single drive vdev, there will be no redundancy in the USB backup
drive. but I'm guessing that since you're using a laptop, it's probably also
a single drive and that you're only using ZFS for the auto compression and
snapshot capabilities.  If you want redundancy, you can always plug in two USB
drives at a time and set them up as a zfs mirrored pool, but then you have to
label them so that you know which pairs of drives belong together

This is not as good as a NAS but it's cheap and easy and a lot better than
nothing.


I recommend using USB drive adaptors that allow you to use any drives in them
(i.e. USB to SATA adaptors), not pre-made self-contained external drives (just
a box with a drive in it and a USB socket or cable).

Sometimes you see them with names like "disk docking station", with a power
adaptor, a USB socket, and SATA slots for 1, 2, or 4 drives.  Other forms
include plain cables with a USB plug on one end and a SATA socket on the
other.

craig

ps: If your backup pool was on some other machine somewhere on the internet,
you can pipe the zfs send over ssh. e.g.

zfs send -R -i source@backup-20200829 source@backup-20200830 | ssh remote-host zfs receive -u -d poolname/dataset

The pool on your laptop is probably small enough that you could do the initial
full backup over 

Re: [MLUG] Advice needed about ZFS

2020-08-20 Thread Craig Sanders via luv-main
On Thu, Aug 20, 2020 at 01:40:03PM +0100, stripes theotoky wrote:
> When we started this discussion we had this
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME   USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria 332G  1.06G96K
>  /alexandria

> Now having moved 47.3 GB of files to an external drive I have this.
>
> stripinska@Stripinska:~$ sudo zfs list
> NAME USED  AVAIL  REFER
>  MOUNTPOINT
> alexandria   332G   782M96K
>  /alexandria
>
> What is eating my space?

to truly delete files from ZFS (or from anything that supports snapshots), you
need to delete not only the file(s) from the current state of the filesystem,
but also any snapshots containing them.

The space will not be freed until there are no remaining snapshots containing
the file(s) you deleted.

Note that you can't delete individual files from a snapshot, you can only
delete entire snapshots.  This will have a significant impact on your backup
and recovery strategy.
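
e.g. to see which snapshots are actually holding space, and to drop one
(the snapshot name here is made up):

# note: 'used' only counts space unique to each snapshot, so space shared
# between snapshots won't show up until the others are gone too
zfs list -t snapshot -o name,used -s used -r alexandria
zfs destroy alexandria@zfs-autosnap-20190601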

> It is not the cache for Firefox as this is only 320M.

Maybe, maybe not.  Browser cache directories tend to change a lot, so they end up
using a lot more space in the snapshots than you might think because they keep
all the cached junk that your browser itself has deleted, but your zfs-autosnap
hasn't expired yet.

There really isn't much value in keeping snapshots of cache dirs like this, so
try creating a new dataset to hold these caches (and make sure it, or at least
sub-directories on it, are RW by your uid).  Configure zfs-autosnap to ignore
it (i.e. no snapshots), and then configure your browser to use it for caches.

I don't use zfs-auto-snap myself, but according to the man page, to exclude
a dataset from zfs-auto-snap, you need to create a property on it called
"com.sun:auto-snapshot' and set it to false. e.g.

zfs create alexandria/nosnaps
zfs set com.sun:auto-snapshot=false alexandria/nosnaps

(btw, you can also set a quota on a dataset so that it can't use all available
space - better to have firefox die because it can't cache extra stuff than to
have other random programs fail or crash due to out of space errors)

If you have multiple programs that keep caches like this, you could create
one dataset each for them.  IMO, it would be better to create just one
dataset (call it something like "alexandria/nosnaps") and then create as many
sub-directories as you need under it.

make /alexandria/nosnaps/stripes/ readable and writable by user 'stripes', and
your programs can create directories and files underneath it as needed. e.g.
something like

mkdir -p /alexandria/nosnaps/stripes/firefox-cache
chown -R stripes /alexandria/nosnaps/stripes
chmod u=rwX /alexandria/nosnaps/stripes



I'm not entirely sure how to change the cache dir in firefox, but i am certain
that it can be done, probably somewhere in "about:config".  At worst, you
can either set the mountpoint of the "nosnaps" dataset to be the cache dir
(remember to quit from firefox and delete the existing cache first), or by
symlinking into a subdir under "nosnaps".  The latter is better because it
enables multiple different cache dirs under the one nosnaps dataset.
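
e.g. the symlink approach would look something like this (the ~/.cache path
is where current firefox on linux normally keeps its disk cache, but check
your own setup first, and quit firefox before doing it):

mv ~/.cache/mozilla/firefox ~/.cache/mozilla/firefox.old
ln -s /alexandria/nosnaps/stripes/firefox-cache ~/.cache/mozilla/firefox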



BTW, if you download stuff with deluge or some other torrent client, you should
make an alexandria/torrents dataset with a recordsize of 16K instead of the
default 128K (bit torrent does a lot of random reads and writes in 16KB
blocks, so this is the optimum recordsize). for example:

  zfs create -o recordsize=16k -o mountpoint=/home/stripes/torrents/download alexandria/torrents
  chown stripes /home/stripes/torrents/download
  chmod u=rwx /home/stripes/torrents/download


zfs-autosnap should be configured to ignore this dataset too, and your torrent
client should be configured to download torrents into this directory, and then
move them to somewhere else once the download has completed.  This avoids
wasting space on snapshots of partially downloaded stuff, AND minimises
fragmented (as the downloaded torrents will be de-fragmented when they're
moved to somewhere else)



there are probably other things that don't need to be snapshotted - like
/tmp, /var/tmp, maybe (parts of) /var/cache, and other directories containing
short-lived transient files.  I wouldn't bother doing anything about them
unless they waste a lot of disk space.


> How do I get it back before the box freezes.

1. for such a small filesystem, I recommend setting a more aggressive snapshot
expiration policy for zfs-autosnap.  From what you've posted, it looks like
zfs-autosnap is configured to keep the last 12 months or so snapshots but you
probably can't afford to keep more than the last three to six months or so of
snapshots.

zfs-auto-snap doesn't seem to have a config file, the snapshot retention
is handled by command-line options, which you can see if you look 

Re: BBC radio program, saving the audio

2020-08-17 Thread Craig Sanders via luv-main
On Mon, Aug 17, 2020 at 04:13:30PM +1000, Craig Sanders wrote:
> The file size is nearly a gigabyte.  You can use ffmpeg or handbrake
> or something to re-encode with x265 to get it smaller (i'd guess that
> transcoding the video to x265 would probably shrink the file by a third to
> a half. maybe more).  If you're only going to watch it once and delete it,
> don't bother...but might be worthwhile if you want to archive it for repeat
> viewing.

I transcoded it with handbrake, using the Matroska H.265 720p30 pre-set, but
it only shrunk to 807 MB.

It took nearly an hour to transcode...and that's on my 16-core threadripper
1950x. a lot of CPU time and electricity used (and the fans ramped up to
deal with the excess heat generated) for not much benefit.

This would be significantly faster if my version of handbrake (the debian
handbrake 1.3.1+ds1-2 package) supported NVENC for GPU hardware encoding,
but it doesn't. It's probably a licensing conflict with GPL vs proprietary
nvidia-licensed code that prevents debian from distributing it. I'd have
to re-compile the package on my own machine to enable GPU-encoding.
Apparently the Ubuntu PPA 1.3.3 version of handbrake has it enabled.

Transcoding using the Matroska H.265 576p25 pre-set (i.e. not only transcoding
to a better/newer encoder, but also reducing the resolution from 1280x720 to
720x576 - same aspect ratio of 16:9) in handrake reduced the file from 998 MB
to 548 MB.  There's some reduction in quality, but not much.

This took about 40 minutes to transcode.

not worth the bother unless you want to archive and are really short on disk
space.

and even then it would be better to use youtube-dl's format and quality options
to download a smaller, lower-quality video in the first place...they've
been transcoded from much higher quality master videos.

e.g.

$ youtube-dl -F https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge/series/1/video/DO1904H001S00
[abc.net.au:iview] DO1904H001S00: Downloading JSON metadata
[abc.net.au:iview] DO1904H001S00: Downloading webpage
[abc.net.au:iview] DO1904H001S00: Downloading m3u8 information
[info] Available formats for DO1904H001S00:
format code  extension  resolution note
hls-287  mp4320x180 287k , avc1.77.30, mp4a.40.2
hls-491  mp4512x288 491k , avc1.77.30, mp4a.40.2
hls-619  mp4640x360 619k , avc1.77.30, mp4a.40.2
hls-917  mp4800x450 917k , avc1.640028, mp4a.40.2
hls-1307 mp41024x576   1307k , avc1.640028, mp4a.40.2
hls-1969 mp41280x720   1969k , avc1.640028, mp4a.40.2
hls-2099 mp41280x720   2099k , avc1.640028, mp4a.40.2 (best)

I'm far from sure, but i'm guessing that the 287k, ... 2099k etc mentioned in
the notes field (and also in the hls-* format codes) are the bandwidth rates
for streaming, from 287 kilobits per second to 2.1 Mbps.


Anyway, you could download the 576p (1024x576) version with:

$ youtube-dl -f hls-1307 --write-sub --all-subs --embed-subs https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge/series/1/video/DO1904H001S00
[abc.net.au:iview] DO1904H001S00: Downloading JSON metadata
[abc.net.au:iview] DO1904H001S00: Downloading webpage
[abc.net.au:iview] DO1904H001S00: Downloading m3u8 information
[info] Writing video subtitles to: Series 1 Ep 1-DO1904H001S00.en.vtt
[hlsnative] Downloading m3u8 manifest
[hlsnative] Total fragments: 397
[download] Destination: Series 1 Ep 1-DO1904H001S00.mp4
[download] 100% of 653.70MiB in 01:09
[ffmpeg] Fixing malformed AAC bitstream in "Series 1 Ep 1-DO1904H001S00.mp4"
[ffmpeg] Embedding subtitles in 'Series 1 Ep 1-DO1904H001S00.mp4'
Deleting original file Series 1 Ep 1-DO1904H001S00.en.vtt (pass -k to keep)


It's larger than my transcoded version because it's h.264 rather than the
newer/better h.265...but it only took a minute to download with my 100 Mbps
NBN connection, much better than 40+ minutes to transcode.


The hls-917 (800x450) version is even smaller (438 MB), and still looks OK to
watch even when upscaled to fullscreen on my 2560x1440 27" monitor, or my 42"
1920x1080 TV.



$ ls -lh Series\ 1\ Ep\ 1-DO1904H001S00*.mp4
-rw-r--r-- 1 cas cas 998M Aug 17 15:16 Series 1 Ep 1-DO1904H001S00-best.mp4
-rw-r--r-- 1 cas cas 623M Aug 18 14:05 Series 1 Ep 1-DO1904H001S00-hls-1307.mp4
-rw-r--r-- 1 cas cas 438M Aug 18 14:10 Series 1 Ep 1-DO1904H001S00-hls-917.mp4

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: BBC radio program, saving the audio

2020-08-17 Thread Craig Sanders via luv-main
On Sun, Aug 16, 2020 at 07:48:57AM +1000, Mark Trickett wrote:
> I heard a BBC documentary on ABC Radio National, in the World Docos
> segment. I want the audio to pass on to others, and I would strongly
> recommend it to all here. I can get it to play, but not to save, yet.
>
> https://www.bbc.co.uk/programmes/m000jtmv

while there are lots of options with youtube-dl for selecting download format
and quality, a basic download using that URL works well here. e.g.

(btw, you can ignore the "HTTP Error 403: Forbidden" warnings - some BBC formats
are geo-locked to the UK - for BBC Subscribers only, i guess)


$ youtube-dl 'https://www.bbc.co.uk/programmes/m000jtmv'
[bbc.co.uk] m000jtmv: Downloading video page
[bbc.co.uk] m000jtmv: Downloading playlist JSON
[bbc.co.uk] m000jtmt: Downloading media selection XML
[bbc.co.uk] m000jtmt: Downloading MPD manifest
[bbc.co.uk] m000jtmt: Downloading m3u8 information
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] m000jtmt: Downloading MPD manifest
WARNING: Failed to download MPD manifest: HTTP Error 403: Forbidden
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] m000jtmt: Downloading MPD manifest
[bbc.co.uk] m000jtmt: Downloading m3u8 information
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: HTTP Error 403: Forbidden
[bbc.co.uk] m000jtmt: Downloading MPD manifest
WARNING: Failed to download MPD manifest: 
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: 
[bbc.co.uk] m000jtmt: Downloading m3u8 information
WARNING: Failed to download m3u8 information: 
[dashsegments] Total fragments: 131
[download] Destination: Watching Us, Week 1-m000jtmt.m4a
[download] 100% of 9.60MiB in 01:58
[ffmpeg] Correcting container in "Watching Us, Week 1-m000jtmt.m4a"

$ mediainfo /tmp/Watching\ Us\,\ Week\ 1-m000jtmt.m4a
General
Complete name: /tmp/Watching Us, Week 1-m000jtmt.m4a
Format   : MPEG-4
Format profile   : Base Media
Codec ID : isom (isom/iso2/mp41)
File size: 9.59 MiB
Duration : 13 min 51 s
Overall bit rate mode: Constant
Overall bit rate : 96.8 kb/s
Writing application  : Lavf58.45.100

Audio
ID   : 1
Format   : AAC LC SBR
Format/Info  : Advanced Audio Codec Low Complexity 
with Spectral Band Replication
Commercial name  : HE-AAC
Format settings  : Explicit
Codec ID : mp4a-40-2
Duration : 13 min 51 s
Bit rate mode: Constant
Bit rate : 96.0 kb/s
Channel(s)   : 2 channels
Channel layout   : L R
Sampling rate: 48.0 kHz
Frame rate   : 23.438 FPS (2048 SPF)
Compression mode : Lossy
Stream size  : 9.52 MiB (99%)
Language : English
Default  : Yes
Alternate group  : 1


I tested the download with smplayer, sounds pretty good.



> I do have youtube-dl installed,and command runner firefox extension,

i've never used youtube-dl with a firefox extension, only from the command
line.

> but not yet sure how to go about. I would also appreciate examples to
> be able to use them to download from ABC iView, I want "saving Planet
> A".

i'm going to assume you mean "Fight For Planet A: Our Climate Challenge" with
Craig Reucassel from The Chaser, because that's the closest thing that
came up when I searched the iview site.

ABC iview is a bit trickier.  Finding the exact URL for a recording
can be difficult.

First, start with the main URL for the show: 
https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge

Use that with `lynx -dump -listonly -nonumbers` and `grep` to find the actual 
download URL.

$ lynx -dump -listonly -nonumbers 'https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge' | grep -i ^http.*/video/
https://www.facebook.com/sharer.php?u=https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge/series/1/video/DO1904H001S00
https://twitter.com/share?url=https://iview.abc.net.au/show/fight-for-planet-a-our-climate-challenge/series/1/video/DO1904H001S00=Fight
 For Planet A: Our Climate Challenge Series 

Re: SpamAssassin rule problem

2020-04-15 Thread Craig Sanders via luv-main
On Tue, Apr 14, 2020 at 01:55:42PM +1000, Russell Coker wrote:
> header   SWS_AUTH   Received =~ /Authenticated sender/
> describe SWS_AUTH   Mail from local SASL is good
> score    SWS_AUTH   -10
>
> I have the above in my SA local.conf file.
>
> Received: from liv (unknown [x.x.x.x])
> [...]
>   (Authenticated sender: russ...@coker.com.au)
>   by smtp.sws.net.au (Postfix) with ESMTPSA id 4BB77F173
> [...]
> X-Spam-Status: No, score=-0.2 required=5.0 tests=DKIM_SIGNED=0.1,
>   DKIM_VALID=-0.1,DKIM_VALID_AU=-0.1,DKIM_VALID_EF=-0.1,
>   UNPARSEABLE_RELAY=0.001 autolearn=disabled version=3.4.2
>
> Above are the relevant headers of a message coming through.  Why am I not
> getting a -10 added to the score?


This section of Mail::SpamAssassin::Conf(3pm) might be relevant:

header SYMBOLIC_TEST_NAME header op /pattern/modifiers [if-unset: STRING]
.
.
.
If the header specified matches multiple headers, their text will
be concatenated with embedded \n's. Therefore you may wish to use
"/m" if you use "^" or "$" in your regular expression.

There's almost always multiple Received: headers in a message, and they're
often multi-line strings so, even though you're not using ^ or $ anchors in
the pattern, it might be worth trying:

 header   SWS_AUTH   Received =~ /Authenticated sender/m


BTW, I have several very similar rules in my local.conf, but matching Subject:
and other headers - none for a Received: header.  They work just fine.


PS: did you reload amavisd, spamc, spampd, or whatever spamassassin daemon you
use AFTER you changed the local.conf file?  Changes don't take effect until
they're loaded in and the regexps compiled.

craig
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-29 Thread Craig Sanders via luv-main
On Tue, Jan 28, 2020 at 08:06:18PM +1100, russ...@coker.com.au wrote:
> On Monday, 20 January 2020 2:34:09 AM AEDT Craig Sanders via luv-main wrote:
> > > [ paraphrased from memory because I deleted it: Russell said  ]
> > > [ something about using btrfs on small boxes, and zfs only on ]
> > > [ big storage servers ]
> >
> > Unless you need to make regular backups from workstations or small servers
> > to a "big storage" ZFS backup server. In that case, use zfs so you can use
> > 'zfs send'.  Backups will be completed in a very small fraction of the
> > time they'd take with rsyncthe time difference is huge - minutes vs
> > hours.  That's fast enough to do them hourly or more frequently if needed,
> > instead of daily.
>
> It really depends on the type of data.

No, it really doesn't.

> Backing up VM images via rsync is slow because they always have relatively
> small changes in the middle of large files.

rsyncing **ANY** large set of data is slow, whether it's huge files like VM
images or millions of small files (e.g. on a mail server).

rsync has to check at least the file sizes and timestamps, and then the block
checksums on every run. On large sets, this WILL take many hours, no matter
how much or how little has actually changed.

'zfs send' and 'btrfs send' already know exactly which blocks have changed and
they just send those blocks, no need for checking.  Why? Because a snapshot is
effectively just a list of blocks in use at a particular point in time.  COW
ensures that if a file is created or changed or deleted, the set of blocks in
the next snapshot will be different.

(a minor benefit of this is that if a file or directory is moved to another
directory in the same dataset, the only blocks that actually changed were the
blocks containing the directory info, so they're the only blocks that need be
sent. rsync, however, would send the entire directory contents because it's
all "new" data. Transparent compression also helps 'zfs send' - compressed
data requires fewer blocks to storer itrsync, though, can't benefit from
transparent compression as it has to compare the source file's *uncompressed*
data with the target copy)

rsync is still useful as a tool for moving/copying data from one location to
another (whether on the same machine or to a different machine), but it's no
longer a good choice for backups. it just takes too long - by the time it has
finished, the source data will have changed.  It's an improved "cp".

I guess it's also still useful for backing up irrelevant machines like those
running MS Windows. But they should be storing important data on the file
server anyway, so they can be blown away and re-imaged whenever required.

> I guess you have to trade off the features of using one filesystem
> everywhere vs the ability to run filesystems independently of what
> applications will run on top.  I like the freedom to use whichever
> filesystem best suits the server.

I prefer to use the filesystem that's best for all machines on the network.

If ZFS is in use on the file-server or backup-server, then that means zfs
on everything else. If it's btrfs on the server, then it should be btrfs on
everything.

send/receive alone are worth putting in the time & effort to standardise, and
both zfs & btrfs also offer many more very useful features.

And if neither is currently in use, then that means scheduling appropriate
times & days to convert everything over to ZFS, starting with the server(s).
btrfs is not an option here because it just isn't as good as zfs...if i'm
going to go to all that trouble and hassle, i may as well get the most/best
benefit in exchange.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-29 Thread Craig Sanders via luv-main
On Tue, Jan 28, 2020 at 08:02:15PM +1100, russ...@coker.com.au wrote:
> Having a storage device fail entirely seems like a rare occurance.  The only
> time it happened to me in the last 5 years is a SSD that stopped accepting
> writes (reads still mostly worked OK).

it's not rare at all, but a drive doesn't have to be completely non-responsive
to be considered "dead".  It just has to consistently cause enough errors that
it results in the pool being degraded.

I recently had a seagate ironwolf 4TB drive that would consistently
cause problems in my "backup" pool (8TB in two mirrored pairs of 4TB
drives, i.e. RAID-10, containing 'zfs send' backups of all my other
machines). Whenever it was under moderately heavy load, it would cause enough
errors to be kicked, degrading the pool.  I didn't have a spare drive to
replace it immediately, so just "zpool clear"-ed it several times.  Running a
scrub on that pool with that drive was guaranteed to degrade the pool within
minutes.

and, yeah, i moved it around to different SATA & SAS ports just in case it was
the port and not the drive. nope. it was the drive.

To me, that's a dead drive because it's not safe to use. it can not be trusted
to reliably store data. it is junk. the only good use for it is to scrap it
for the magnets.


(and, btw, that's why I use ZFS and used to use RAID. Without redundancy from
RAID-[156Z] or similar, such a drive would result in data loss. Even worse,
without the error detection and correction from ZFS, such a drive would result
in data corruption).

> I've had a couple of SSDs have checksum errors recently and a lot of hard
> drives have checksum errors.  Checksum errors (where the drive returns what
> it considers good data but BTRFS or ZFS regard as bad data) are by far the
> most common failures I see of the 40+ storage devices I'm running in recent
> times.

a drive that consistently returns bad data is not fit for purpose. it is junk.
it is a dead drive.

> BTRFS "dup" and ZFS "copies=2" would cover almost all storage hardware
> issues that I've seen in the last 5+ years.

IMO, two copies of data on a drive you can't trust isn't significantly better
or more useful than one copy. It's roughly equivalent to making a photocopy
of your important documents and then putting both copies in the same soggy
cardboard box in a damp cellar.

If you want redundancy, use two or more drives. Store your important documents
in two or more different locations.

and backup regularly.


> > If a drive is failing, all the read or write re-tries kill performance on
> > a zpool, and that drive will eventually be evicted from the pool. Lose
> > enough drives, and your pool goes from "DEGRADED" to "FAILED", and your
> > data goes with it.
>
> So far I haven't seen that happen on my ZFS servers.  I have replaced at
> least 20 disks in zpools due to excessive checksum errors.

I've never had a pool go to FAILED state, either.  I've had pools go to
DEGRADED *lots* of times.  And almost every time it comes after massive
performance drops due to retries - which can be seen in the kernel
logs. Depending on the brand, you can also clearly hear the head re-seeking as
it tries again and again to read from the bad sector.

More importantly, it's not difficult or unlikely for a pool go from being
merely DEGRADED to FAILED.

A drive doesn't have to fail entirely for it be kicked out of the pool, and if
you have enough drives kicked out of a vdev or a pool (2 drives for mirror or
raidz-1, 3 for raidz-2, 4 for raidz-3), then that entire vdev is FAILED, not
just DEGRADED, and the entire pool will likely be FAILED(*) as a result.

That's what happens when there are not enough working drives in a vdev to
store the data that's supposed to be stored on it.

And the longer you wait to replace a dead/faulty drive, the more likely it
is that another drive will die while the pool is degraded.  Which is why
best practise is to replace the drive ASAP...and also why zfs and some other
raid/raid-like HW & SW support "spare" devices to automatically replace them.


(*) there are some pool layouts that are resistant (but not immune) to failing
- e.g. a mirror of any vdev with redundancy, such as a mirrored pair of raidz
vdevs. which is why RAID of any kind is not a substitute for backups.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: BTRFS and ZFS training

2020-01-26 Thread Craig Sanders via luv-main
On Fri, Jan 24, 2020 at 11:36:45PM +1100, Russell Coker wrote:
> https://doc.coker.com.au/training/btrfs-training-exercises/
> https://doc.coker.com.au/training/zfs-training-exercises/
>
> I've put some simple BTRFS and ZFS training exercises at the above URLs
> (this is the training that was done at a LUV Saturday meeting in 2015).  As
> they involve deliberately corrupting data to exercise the filesystems it's a
> good idea to to this on a VM so a typo won't cause problems.

Nice.

If you ever do an update of these, I'd suggest adding snapshot & send/receive
exercises because doing backups with snapshots is one of the best features of
both btrfs and zfs - better than rsync in a tiny fraction of the time (minutes
vs hours).  This would require two pools so add, e.g., /dev/vd[fgh] virtual
drives to the VM.  Or run two VMs to show remote send/receive via ssh.  100 or
200 MB each is enough for each pool drive.
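
e.g. a minimal sketch of what that exercise could look like inside the VM
(device names assumed):

zpool create srcpool mirror /dev/vdf /dev/vdg
zpool create dstpool /dev/vdh
zfs snapshot srcpool@exercise1
zfs send srcpool@exercise1 | zfs receive -u dstpool/srcpool-backup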



BTW, I noticed two minor problems with the zfs instructions:

First is that you tell the reader to add the buster-updates contrib repo
to sources.list, then tell them to install spl-dkms and zfs-dkms from
buster-backports.

This should probably add the buster-backports repo instead:

deb http://mirror.internode.on.net/pub/debian/ buster-backports contrib main


The second is that spl-dkms is no longer needed with zfs 0.8 and above, it's a
dummy package.   buster-backports contains zfs 0.8.2

Getting rid of spl-dkms doesn't make it any faster to install & compile the
zfs module - most of what was in spl-dkms has been folded into zfs-dkms.

(on my threadripper 1950x box, it takes about a minute or a minute and a half
to compile the zfs module. on my older Phenom II and FX-8xx0 boxes, it takes
around 5 minutes)





BTW, zfs 0.8.3 just arrived in unstable/sid. the NEWS.Debian.gz file says:

zfs-linux (0.8.3-1) unstable; urgency=medium

  The SIMD acceleration has been fixed in this upstream release.
  Users could use the following two commands to double check:

$ cat /sys/module/zfs/parameters/zfs_vdev_raidz_impl
$ cat /sys/module/zcommon/parameters/zfs_fletcher_4_impl

  Another notable change is the parallelization of the KABI checks
  during the configure stage, which may significantly boost the
  postinst stage of zfs-dkms that builds the kernel module.

 -- Mo Zhou   Sat, 25 Jan 2020 13:26:02 +0800


I just upgraded my ztest VM to this today.  Compiled and rebooted
without problem.  Will upgrade my actual machines over the next few
days or so.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-19 Thread Craig Sanders via luv-main
On Sun, Jan 19, 2020 at 05:38:23PM +1100, russ...@coker.com.au wrote:
> Generally I recommend using BTRFS for workstations and servers that have 2
> disks.  Use ZFS for big storage.

Unless you need to make regular backups from workstations or small servers to
a "big storage" ZFS backup server. In that case, use zfs so you can use 'zfs
send'.  Backups will be completed in a very small fraction of the time they'd
take with rsyncthe time difference is huge - minutes vs hours.  That's
fast enough to do them hourly or more frequently if needed, instead of daily.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-19 Thread Craig Sanders via luv-main
On Sun, Jan 19, 2020 at 05:34:46PM +1100, russ...@coker.com.au wrote:
> I generally agree that RAID-1 is the way to go.  But if you can't do that
> then BTRFS "dup" and ZFS "copies=2" are good options, especially with SSD.

I don't see how that's the case, how it can help much (if at all). Making a
second copy of the data on the same drive that's failing doesn't add much
redundancy, but does add significantly to the drive's workload (increasing the
risk of failure).

It might be ok on a drive with only a few bad sectors or in conjunction with
some kind of RAID, but it's not a substitute for RAID.


> So far I have not seen a SSD entirely die, the worst I've seen is a SSD stop

I haven't either, but I've heard & read of it.  Andrew's rootfs SSD seems to
have died (or possibly just corrupted so badly it can't be mounted. i'm not
sure)

I've seen LOTS of HDDs die.  Even at home I've had dozens die on me over the
years - I've got multiple stacks of dead drives of various ages and sizes
cluttering up shelves (mostly waiting for me to need another fridge magnet or
shiny coffee-cup coaster :)

> I've also seen SSDs return corrupt data while claiming it to be good, but
> not in huge quantities.

That's one of the things that btrfs and zfs can detect...and correct if
there's any redundancy in the storage.

> For hard drives also I haven't seen a total failure (like stiction) for many
> years.  The worst hard drive problem I've seen was about 12,000 read errors,
> that sounds like a lot but is a very small portion of a 3TB disk and "dup"
> or "copies=2" should get most of your data back in that situation.

If a drive is failing, all the read or write re-tries kill performance on a
zpool, and that drive will eventually be evicted from the pool. Lose enough
drives, and your pool goes from "DEGRADED" to "FAILED", and your data goes
with it.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-19 Thread Craig Sanders via luv-main
On Sun, Jan 19, 2020 at 04:48:30PM +1100, Andrew Greig wrote:
> here is the output of blkid
>
> /dev/sdb1: LABEL="Data" UUID="73f55e83-2038-4a0d-9c05-8f7e2e741517" 
> UUID_SUB="77fdea4e-3157-45af-bba4-7db8eb04ff08" TYPE="btrfs" 
> PARTUUID="d5d96658-01"
> /dev/sdc1: LABEL="Data" UUID="73f55e83-2038-4a0d-9c05-8f7e2e741517" 
> UUID_SUB="8ad739f7-675e-4aeb-ab27-299b34f6ace5" TYPE="btrfs" 
> PARTUUID="a1948e65-01"
>
> I tried the first UUID for sdc1 and the machine hung but gave me an
> opportunity to edit the fstab and reboot.

That should work. Are you sure you typed or copy-pasted the UUID correctly?
The fstab entry should look something like this:

UUID="73f55e83-2038-4a0d-9c05-8f7e2e741517" /data   btrfs   defaults
0   0

edit /etc/fstab so that it looks like that and then (as root) run "mount
/data".  If that works manually on the command line, it will work when the
machine reboots.

> When checking the UUID I discovered that the first entry for both drives
> were identical.

yes, that's normal. they're both members of the same btrfs array.

> Should I be using the SUB UUID for sdc1 for the entry in fstab?

No, you should use the UUID.



Alternatively, you could use ONE of the PARTUUID values. e.g. one of:

PARTUUID="d5d96658-01"  /data   btrfs   defaults0   0
PARTUUID="a1948e65-01"  /data   btrfs   defaults0   0

craig

PS: I just tested several variations on this on my btrfs testing VM.  UUID
works.  PARTUUID works. /etc/fstab does not support UUID_SUB (and it isn't
mentioned in `man fstab`).

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-18 Thread Craig Sanders via luv-main
On Sat, Jan 18, 2020 at 11:06:50PM +1100, Andrew Greig wrote:
> Yes, the problem was my Motherboard would not handle enough disks, and we
> did Format sdc with btrfs and left the sdb alone so that btrfs could arrange
> things between them.
>
> I was hoping to get an understanding of how the RAID drives remembered the
> "Balance" command when the the whole of the root filesystem was replaced on
> a new SSD.

Your rootfs and your /data filesystem(*) are entirely separate.  Don't confuse
them.

The /data filesystem needed to be re-balanced when you added the second drive
(making it into a raid-1 array). 'btrfs balance' reads and rewrites all the
existing data on a btrfs filesystem so that it is distributed equally over all
drives in the array.  For RAID-1, that means mirroring all the data on the
first drive onto the second, so that there's a redundant copy of everything.

Your rootfs is only a single partition, it doesn't have a raid-1 mirror, so
re-balancing isn't necessary (and would do nothing).


BTW, there's nothing being "remembered". 'btrfs balance' just re-balances the
existing data over all drives in the array. It's a once-off operation that
runs to completion and then exits. All **NEW** data will be automatically
distributed across the array.  If you ever add another drive to the array, or
convert it to raid-0 (definitely NOT recommended), you'll need to re-balance
it again. until and unless that happens you don't need to even think about
re-balancing, it's no longer relevant.



(*) I think you had your btrfs raid array mounted at /data, but I may be
mis-remembering that.  To the best of my knowledge, you have two entirely
separate btrfs filesystems - one is the root filesystem, mounted as / (it also
has /home on it, which IIRC you have made a separate btrfs sub-volume for).
Anyway, it's a single-partition btrfs fs with no raid. The other is a 2 drive
btrfs fs using raid-1, which I think is mounted as /data.


> I thought that control would have rested with /etc/fstab.  How do the
> drives know to balance themselves, is there a command resident in sdc1?

/etc/fstab tells the system which filesystems to mount. It gets read at boot
time by the system start up scripts.


> My plan is to have auto backups, and given that my activity has seen an SSD
> go down in 12 months, maybe at 10 months I should build a new box, something
> which will handle 64Gb RAM and have a decent Open Source Graphics driver.
> And put the / on a pair of 1Tb SSDs.

That would be a very good idea.  Most modern motherboards will have more than
enough NVME and SATA slots for that (e.g. most Ryzen x570 motherboards have
2 or 3 NVME slots for extremely fast SSDs, plus 6 or 8 SATA ports for SATA
HDDs and SSDs. They also have enough RAM slots for 64GB DDR-4 RAM, and have at
least 2 or 3 PCI-e v4 slots - you'll use one for your graphics card).

2 SSDs for the rootfs including your home dir, and 2 HDDs for your /data bulk
storage filesystem.  And more than enough drive ports for future expansion if
you ever need it.


---

some info on nvme vs sata:

NVME SSDs are **much** faster then SATA SSDs.  SATA 3 is 6 Gbps (600 MBps), so
taking protocol overhead into account SATA drives max out at around 550 MBps.

NVME drives run at **up to** PCI-e bus speeds - with 4 lanes, that's a little
under 32 Gbps for PCIe v3 (approx 4000 MBps minus protocol overhead), double
that for PCIe v4.  That's the theoretical maximum speed, anyway. In practice,
most NVME SSDs run quite a bit slower than that, about 2 GBps - that's still
almost 4 times as fast as a SATA SSD.

Some brands and models (e.g. those from samsung and crucial) run at around
3200 to 3500 MBps, but they cost more (e.g. a 1TB Samsung 970 EVO PLUS
(MZ-V7S1T0BW) costs around $300, while the 1TB Kingston A2000 (SA2000M8/1000G)
costs around $160 but is only around 1800 MBps).

AFAIK there are no NVME drives that run at full PCI-e v4 speed (~8 GBps with
4 lanes) yet, it's still too new. That's not a problem, PCI-e is designed to
be backwards-compatible with earlier versions, so any current NVME drive will
work in pcie v4 slots.

NVME SSDs cost about the same as SATA SSDs of the same capacity so there's no
reason not to get them if your motherboard has NVME slots (which are pretty
much standard these days).


BTW, the socket that NVME drives plug into is called "M.2".  M.2 supports
both SATA & NVME protocols.  SATA M.2 runs at 6 Gbps.  NVME runs at PCI-e bus
speed. So you have to be careful when you buy to make sure you get an NVME M.2
drive and not a SATA drive in M.2 form-factor...some retailers will try to
exploit the confusion over this.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-17 Thread Craig Sanders via luv-main
On Sat, Jan 18, 2020 at 02:14:46PM +1100, Andrew McGlashan wrote:
> Just some thoughts
>
> Way back, SSDs were expensive and less reliable than today.
>
> Given the cost of SSDs today, I would consider even RAIDING the SSDs.

If it's physically possible to install a second SSD of the same storage
capacity or larger then he absolutely should do so.  I vaguely recall
suggesting he should get a second SSD for the rootfs ages ago, but my
understanding / assumption was that there was only physical space and
connectors for one SSD in the machine.

The 'btrfs snapshot' + 'btrfs send' suggestion was just a way of regularly
backing up a single-drive btrfs filesystem onto his raid-1 btrfs array so that
little or nothing was lost in case of another drive failure. It's less than
ideal, but a LOT better than nothing.

I personally would never use anything less than RAID-1 (or equivalent, such
as a mirrored pair on zfs) for any storage. Which means, of course, that I'm
used to paying double for my storage capacity - i can't just buy one, I have
to buy a pair.  Not as a substitute for regular backups, but for convenience
when only one drive of a pair has died.

Drives die, and the time & inconvenience of dealing with that (and the lost
data) cost far more than the price of a second drive for raid-1/mirror.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-17 Thread Craig Sanders via luv-main
On Sat, Jan 18, 2020 at 01:41:05PM +1100, Andrew Greig wrote:
> alg@andrewg:~$ sudo cat /etc/fstab
> [sudo] password for alg:
> # /etc/fstab: static file system information.
> #
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> # <file system> <mount point>   <type>  <options>       <dump>  <pass>
> # / was on /dev/sda3 during installation
> UUID=2dfcd965-625b-47d5-a267-b02276320922 /      btrfs   defaults,subvol=@ 0   1
> # /home was on /dev/sda3 during installation
> UUID=2dfcd965-625b-47d5-a267-b02276320922 /home  btrfs   defaults,subvol=@home 0   2
> # swap was on /dev/sda2 during installation
> UUID=b2c6d1c4-4b94-4171-954e-9f5d56704514 none   swap    sw  0   0
> alg@andrewg:~$
>
> Are these two following commands  OK to apply to drives that were balanced
> previously and hold data?
>
>             sudo btrfs device add -f /dev/sdc1 /data
>
>             sudo btrfs ballance start -dconvert=raid1 -mconvert=raid1 /data
>

No, don't run any of those commands, especially the 'btrfs add' command - you
will destroy your existing data array if you run that.

Run blkid to list all attached block devices, figure out which one of them
is your data array, and add an fstab entry for it.

if you can't figure out which is the correct one, reply and include blkid's
output.


> and will issuing those commands write that into fstab?

no.

craig

--
craig sanders 

BOFH excuse #376:

Budget cuts forced us to sell all the power cords for the servers.
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-17 Thread Craig Sanders via luv-main
On Sat, Jan 18, 2020 at 01:20:39PM +1100, pushin.linux wrote:
> I have elected to start with a Ubuntu 18.04 LTS desktop install.The Raid
> drives were picked up, ie are available, but does the balance command need
> to be issued again?

You only need to run 'btrfs balance' when you're changing the number and/or
size of drives (or partitions) in the btrfs array. The command re-balances all
of the data on the array, roughly-equally across all the drives.

So, if you're not adding drives to the array, you don't need to re-balance it.

(btw, 'btrfs balance' is the one feature that btrfs has that I wish zfs had)
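
If you want to confirm that the data really is spread across both drives,
something like the following will show the per-device and per-profile usage:

    sudo btrfs filesystem usage /data
    sudo btrfs device usage /data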

> I had two lines to set up the raid and balance them at the start.

IIRC, I think I advised you to do something like: 1. create a degraded btrfs
array with just one of the drives; 2. copy your data to it; 3. add another drive
to the btrfs array with 'btrfs add'; 4. re-balance the data so that it's on both
drives with 'btrfs balance'.

If so, that'll be why you have two commands written down.

> I suspect that without those commands only one drive will be written to.

nope.

This time around, your btrfs array for /data ALREADY EXISTS, so you don't have
to do any of that.

And you certainly SHOULD NOT run mkfs.btrfs, that would erase your current
btrfs array and re-format it.

All you need to do this time is add an entry to /etc/fstab so that it mounts
correctly on boot.   Something like the following:

UUID="c0483385-ca6f-abb3-aeeb-94793439a637"  /databtrfs 
   defaults,relatime 0 0

run 'blkid' to find the correct uuid for your /data fs and use it instead
of the bogus one in the example above.
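
Once the entry is in fstab, you can sanity-check it without rebooting with
something like:

    sudo mount -a      # mounts anything in fstab that isn't already mounted
    findmnt /data      # confirms what got mounted there, and with which options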

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Rebuild after disk fail

2020-01-17 Thread Craig Sanders via luv-main
On Fri, Jan 17, 2020 at 11:36:29AM +1100, Andrew Greig wrote:
> I recently experienced an SSD failure, and so I have purchased another to
> set up my system again. I received some substantial help from this list
> early in 2019 to build my machine with this SSD as / and /home under Ubuntu
> 18.04 with two x 2Tb conventional drives in RAID for storing my work, all
> are running btrfs.

You lost your home dir and the data in it when your SSD failed Because your
rootfs and /home on the SSD doesn't have any redundancy (i.e. it was a single
partition, with no RAID).  I strongly recommend setting up a cron job to
regularly snapshot it (at least once/day) and do a 'btrfs send' of that
snapshot to a sub-volume of your /data filesystem.

That way you won't lose much data from that partition if your SSD dies again
- you can retrieve it from the last snapshot backup, and will only lose any
changes since then.

If your / and /home are on separate partitions (or btrfs sub-volumes) you will
need to do this for both of them.

(if you weren't running btrfs on /, you could do this with rsync instead of
'btrfs send', but rsync would be a lot slower)
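
As a rough sketch only (untested, adjust paths to suit - it assumes /home is a
btrfs subvolume, that /home/.snapshots exists, and that /data/backups is an
existing subvolume on the raid-1 array), the daily cron job could be as simple
as:

    #!/bin/bash
    # daily read-only snapshot of /home, sent to the /data array as a backup.
    # run from root's crontab or /etc/cron.daily
    set -e
    snap="/home/.snapshots/home-$(date +%F)"
    btrfs subvolume snapshot -r /home "$snap"
    btrfs send "$snap" | btrfs receive /data/backups/

Incremental sends ('btrfs send -p <previous snapshot>') make subsequent runs
much faster, but the simple version above is enough to get started.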


IME, drives are fragile and prone to failure. It's always best to make plans
and backup procedures so that WHEN (not IF) a drive fails, you don't lose
anything important...or, at least, minimise your losses.

Also, remember that RAID is not a substitute for backup so you should
regularly backup your /data filesystem to tape or other drives. Ideally,
you should try to have an off-site backup in case of fire/flood/etc (e.g.
backup to an external USB drive and store it at your office, lawyer's safe, a
friend's house or somewhere. Have at least two of these so you can rotate the
offsite backups).




> After the machine was running I was asked if I had set up the machine using
> Ubuntu Server, I hadn't, because at that time I didn't see those options.
>
> I am thinking, then, for this build, perhaps I should set it up using Ubuntu
> Server. I will need to get my system to recognise the RAID drives as well.

If the installer doesn't automatically detect your /data btrfs filesystem and
add it to /etc/fstab, it's easy enough to add it yourself.

> So before I jump in the deep end again, are there any "gotchas" of which I
> should be aware.
>
> Will the server version make life more reliable?

the only significant difference between the server and desktop versions of
ubuntu are the packages which are installed by default. e.g. the desktop
version installs a whole bunch of desktop stuff (X, desktop environment and
GUI apps, etc) that the server version doesn't. Otherwise, they're the same -
same kernel, same libc and other standard system libraries, etc.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Weird boot issue

2020-01-12 Thread Craig Sanders via luv-main
On Sun, Jan 12, 2020 at 04:20:42PM +1100, pushin.linux wrote:
> 
>  Original message From: Craig Sanders via luv-main 
>  Date: 11/1/20  2:12 pm  (GMT+10:00) To: 
> luv-main@luv.asn.au Subject: Re: Weird boot issue On Fri, Jan 10, 2020 at 
> 05:01:57PM +1100, pushin.linux wrote:> Hi all,I am in a great dilemma, when 
> rebooting after agreeing to a> software update, did not check the packges 
> involved, but running Ubuntu> 18.04, I was confronted with this:>> GRUB 
> version 2.02> Minimal BASH-like line editing is supported.>> For the first 
> word, TAB lists possible command completions. Anywhere else> TAB lists 
> possible device or file completions.>> grub> _  ( flashing cursor)> Don't 
> know what to do here.There's an old (2014) but still pretty good tutorial 
> here:https://www.linux.com/tutorials/how-rescue-non-booting-grub-2-linux/see 
> also:https://help.ubuntu.com/community/Grub2/TroubleshootingAlternatively, 
> you could boot with a rescue CD or USB stick, and examine thedrives and 
> partitions with fdisk or gdisk or similar in order to identify theroot 
> filesystem (and the /boot filesystem if it's separate from the root fs).Then 
> mount the root fs as, say, "/mnt", /boot as "/mnt/boot" (if separate),and 
> optionally other filesystems (you probably won't need other partitionsmounted 
> for the rescue operation).Then bind-mount the pseudo-filesystems /dev, /proc, 
> and /sys under /mnt.These are needed for update-grub to work.  Here's a quick 
> way to do that:    for i in proc dev sys ; do mount -o bind "/$i" "/mnt/$i" ; 
> doneRun "chroot /mnt" to get a shell where /mnt is treated as the root 
> fs.Then you can edit your /etc/default/grub (if needed - e.g. make sure that 
> theline "GRUB_DISABLE_LINUX_UUID=true" is commented out), and run 
> "update-grub".Finally, exit the chroot, unmount the partitions you mounted 
> (in roughlyreverse order to the order you mounted them - e.g. "umount 
> /mnt/dev" beforetrying to "umount /mnt"), and reboot.  something like this 
> should work:    for i in proc dev sys boot / ; do umount "/mnt/$i" ; doneDue 
> to paranoia and ancient & long-obsolete habit, I like to run "sync" 
> tomanually flush all buffers to disk before rebooting. e.g. "sync ; shutdown 
> -rnow".  It's harmless and useless when doing a clean reboot with 
> "shutdown",but may prevent or reduce fs corruption if you have to do a forced 
> reboot(e.g. with "reboot -f") or power-cycle.> I am running an ssd as sda, 
> sdb and sdc are in a btrfs RAID array. Puzzled> by this behaviour.I don't 
> know if this is related to your boot problem or not, but this is thewrong way 
> to think about drives in linux.Drives MAY (and often do) retain the same 
> device names (/dev/sda, /dev/sdb,/dev/sdc etc) when you reboot a machine, but 
> you can NOT rely on that beingthe case.These device names are NOT guaranteed 
> to survive across reboots. In fact, itis has been documented for many years 
> that these device names are not constantor reliable.Storage devices may get 
> different device names for a variety of reasons,including (but not limited 
> to): drives (including USB drives) being added orremoved, new kernel version, 
> module load order, minor timing differences fordrives spinning up or being 
> detected by the kernel, and more.This is why you should always configure grub 
> and /etc/fstab to use UUIDs orLABELs rather than hard-coding /dev/sdXn device 
> names.craig--craig sanders 
> ___luv-main 
> mailing 
> listluv-m...@luv.asn.auhttps://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-mainHi
>  Craig,I am wading through the first option "how to rescue..."I have found 
> rootgrub> ls (hd0,3)/@/  @/homeand ran this ...ls (hd0,3)/@/dev/  run/  boot/ 
>  a whole lot of stuff ... inc vmlinuz  and home/ and root/grub> ls @/boot/# 
> whole lot of stuff including vmlinuz-4.15.0-74-genericI cant seem to get the 
> syntax right in line 2  for the location of the kernelLine one worked OK, but 
> should I be using root=/dev/sda3 ?I am writing this with one finger on a 
> tablet.


I can't read that mess. i'm not even going to try deciphering it.

Also, this should be sent to the mailing list, not to me personally. so I'm
replying back to the list.

craig

___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Weird boot issue

2020-01-11 Thread Craig Sanders via luv-main
On Sat, Jan 11, 2020 at 10:02:24PM +1100, Andrew Greig wrote:
> Another thought. Is there a way to command a system upgrade from Ubuntu
> 18.04 to 19.10 , or download the DVD and

Well, yes. Ubuntu is meant to be upgraded in place. As long as you have a CD,
DVD, or internet connection, you can upgrade from one version of ubuntu to the
next.  i.e. from 18.04 to 18.10, then 19.04 and then 19.10
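
If you're upgrading over the network rather than from a CD/DVD, the usual
sequence is something like (generic advice, not specific to this machine):

    sudo apt update && sudo apt full-upgrade   # get the current release fully up to date first
    sudo do-release-upgrade                    # then step up to the next release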

Canonical put a lot of effort into making sure that upgrading to the next
version of Ubuntu works.  They put none at all into making sure that skipping
a release or two will work - they do not recommend or support that at all.

If you're willing and able to deal with and fix a variety of minor issues and
possibly some major ones too, it is possible to upgrade directly from 18.04
to 19.10 but that is a) not the recommended procedure (in fact it is very
strongly NOT recommended), and b) not guaranteed to work smoothly (or at all).
Someone with a LOT of experience with debian or ubuntu should be able to do
it; anyone else should follow the recommended upgrade procedure.

> install instead of repair?

I would never recommend that an ubuntu (or debian, or mint, etc) system be
re-installed rather than upgraded/repaired UNLESS the system had been
completely trashed or the system had been compromised (in which case, you
can't trust ANY executable or config file on the system).

In either of these cases, make a backup of your DATA if you don't already have
one, install the system from scratch, and then restore your data.

If you also have backups of config files in /etc or wherever, you can use them
as guidelines to recreate new config files (if the system had been compromised)
or just copy them (if it hadn't been compromised).


> I had been thinking of moving up a release anyway. Just that S**t happened
> before I could. Would it pick up the Raid under BTRFS?

as long as the entry in /etc/fstab is good, it should.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Weird boot issue

2020-01-11 Thread Craig Sanders via luv-main
On Sat, Jan 11, 2020 at 03:44:21PM +1100, Andrew Greig wrote:
> As it turns out the system does reference the drives by UUIDs.
>
> So, my question is, "by fixing grub 2.8 will my RAID under BTRFS just work?
> Or will I have to run something to get it working as well?"

if you're not booting off the btrfs drives, grub doesn't need to know or
care that they exist. All it needs to know about is where and what the root
filesystem and /boot directory/filesystem are - i.e. what drive and partition
and what kind of fs (e.g. ext2/3/4, xfs, btrfs, etc).

The btrfs filesystem will be mounted AFTER the kernel has booted, mounted the
rootfs, and started the init process (probably systemd for newer systems,
maybe sysvinit for older systems).  It will use the contents of /etc/fstab to
do that - and that's where it's important to use UUID or LABEL rather than
hard-coding /dev/sdXn device names.

In short as long as the /etc/fstab entry for your btrfs mount is correct, then
it will work.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Weird boot issue

2020-01-10 Thread Craig Sanders via luv-main
On Fri, Jan 10, 2020 at 05:01:57PM +1100, pushin.linux wrote:
> Hi all,I am in a great dilemma, when rebooting after agreeing to a
> software update, did not check the packges involved, but running Ubuntu
> 18.04, I was confronted with this:
>
> GRUB version 2.02
> Minimal BASH-like line editing is supported.
>
> For the first word, TAB lists possible command completions. Anywhere else
> TAB lists possible device or file completions.
>
> grub> _  ( flashing cursor)
> Don't know what to do here.

There's an old (2014) but still pretty good tutorial here:

https://www.linux.com/tutorials/how-rescue-non-booting-grub-2-linux/

see also:

https://help.ubuntu.com/community/Grub2/Troubleshooting


Alternatively, you could boot with a rescue CD or USB stick, and examine the
drives and partitions with fdisk or gdisk or similar in order to identify the
root filesystem (and the /boot filesystem if it's separate from the root fs).

Then mount the root fs as, say, "/mnt", /boot as "/mnt/boot" (if separate),
and optionally other filesystems (you probably won't need other partitions
mounted for the rescue operation).

Then bind-mount the pseudo-filesystems /dev, /proc, and /sys under /mnt.
These are needed for update-grub to work.  Here's a quick way to do that:

for i in proc dev sys ; do mount -o bind "/$i" "/mnt/$i" ; done

Run "chroot /mnt" to get a shell where /mnt is treated as the root fs.

Then you can edit your /etc/default/grub (if needed - e.g. make sure that the
line "GRUB_DISABLE_LINUX_UUID=true" is commented out), and run "update-grub".

Finally, exit the chroot, unmount the partitions you mounted (in roughly
reverse order to the order you mounted them - e.g. "umount /mnt/dev" before
trying to "umount /mnt"), and reboot.  something like this should work:

for i in proc dev sys boot / ; do umount "/mnt/$i" ; done

Due to paranoia and ancient & long-obsolete habit, I like to run "sync" to
manually flush all buffers to disk before rebooting. e.g. "sync ; shutdown -r
now".  It's harmless and useless when doing a clean reboot with "shutdown",
but may prevent or reduce fs corruption if you have to do a forced reboot
(e.g. with "reboot -f") or power-cycle.

> I am running an ssd as sda, sdb and sdc are in a btrfs RAID array. Puzzled
> by this behaviour.

I don't know if this is related to your boot problem or not, but this is the
wrong way to think about drives in linux.

Drives MAY (and often do) retain the same device names (/dev/sda, /dev/sdb,
/dev/sdc etc) when you reboot a machine, but you can NOT rely on that being
the case.

These device names are NOT guaranteed to survive across reboots. In fact, it
has been documented for many years that these device names are not constant
or reliable.

Storage devices may get different device names for a variety of reasons,
including (but not limited to): drives (including USB drives) being added or
removed, new kernel version, module load order, minor timing differences for
drives spinning up or being detected by the kernel, and more.

This is why you should always configure grub and /etc/fstab to use UUIDs or
LABELs rather than hard-coding /dev/sdXn device names.
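
The stable names are easy to inspect, e.g.:

    ls -l /dev/disk/by-uuid/    # symlinks from UUIDs to whatever sdXn the kernel picked this boot
    ls -l /dev/disk/by-label/   # same for labelled filesystems

    # illustrative fstab line using a LABEL (only works if you've actually set
    # one, e.g. with 'btrfs filesystem label')
    LABEL=data  /data  btrfs  defaults,noatime  0  0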

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Process Substitution

2020-01-06 Thread Craig Sanders via luv-main
On Mon, Jan 06, 2020 at 01:30:40PM +1100, Rodney Brown wrote:
> https://www.spinellis.gr/blog/20191214/
> Convert file I/O into pipe I/O with /dev/fd
>
> Some Unix commands read data from files or write data to files, without
> offering an obvious way to use them as part of a pipeline. How can you
> write a program to interact with such a command in a streaming fashion?
>
> This would allow your program and the command run concurrently, without the
> storage and I/O overhead of a temporary file. You could create and use a
> named pipe, but this is a clunky solution, requiring you to create and
> destroy a unique underlying file name. Here's a better approach.
>
> Modern Unix systems offer a virtual device directory named /dev/fd/. The
> files in it correspond to the file descriptors of each process. ...

This is known as Process Substitution (because it substitutes a filename arg
with a process - which can be any command or list or pipeline of commands
that either output to stdout or consume stdin).  Several shells have built-in
support for it, including bash, some versions of ksh, and zsh.
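
A couple of generic examples (purely illustrative):

    # compare the sorted contents of two files without creating temp files
    diff <(sort file1.txt) <(sort file2.txt)

    # give a "filename" to a command that insists on one
    wc -l <(grep -v '^#' /etc/services)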

Note that Process Substitution is not available when bash is run in POSIX
mode - e.g. when /bin/sh is a symlink to bash and your script uses #!/bin/sh
instead of #!/bin/bash, or when POSIXLY_CORRECT is set, or if you use 'set -o
posix', etc.

See also:

https://en.wikipedia.org/wiki/Process_substitution
http://www.tldp.org/LDP/abs/html/process-sub.html
https://mywiki.wooledge.org/ProcessSubstitution

The wikipedia link above has a very nice, short and clear explanation of what
it is, why it's useful, and how to use it.

There are also over 100 questions (with answers) about Process Substitution on
Unix & Linux Stack Exchange, with lots of good examples and explanations:

https://unix.stackexchange.com/questions/tagged/process-substitution

This Q is a pretty good one:

https://unix.stackexchange.com/questions/17107/process-substitution-and-pipe


Finally, it's documented in the bash man page:

  Process Substitution

Process substitution allows a process's input or output to be referred to
using a filename.

It takes the form of <(list) or >(list).  The process list is run
asynchronously, and its input or output appears as a filename.

This filename is passed as an argument to the current command as the
result of the expansion.  If the >(list) form is used, writing to the file
will provide input for list.  If the <(list) form is used, the file passed
as an argument should be read to obtain the output of list.

Process substitution is supported on systems that support named pipes
(FIFOs) or the /dev/fd method of naming open files.

When available, process substitution is performed simultaneously with
parameter and variable expansion, command substitution, and arithmetic
expansion.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Bash Recursive File Rename

2019-11-14 Thread Craig Sanders via luv-main
On Fri, Nov 15, 2019 at 11:51:40AM +1000, Piers wrote:
> I have a bunch of files that I want to rename:
>
> 123.someword.doc > 123.doc
>
> 456.someword.pdf > 456.pdf
>
> The "someword" is consistent in all the files and they need to be renamed
> recursively.

Use the perl-based 'rename' tool (which is sometimes in the PATH as 'prename'
or 'file-rename' instead of just 'rename').

Note, this is **NOT** the same as the rename tool from util-linux (which has
completely different command-line options and capabilities, and is sometimes
called 'rename.ul').

On debian (and ubuntu, etc), 'apt-get install rename' if it isn't already
installed (run 'man rename' to see if it mentions Perl). On other distros,
install whatever package provides the perl File::Rename module.


Anyway, the perl 'rename' allows you to use ANY perl code, from trivial
sed-like search and replace to complex perl code blocks to rename files. e.g.
i've written rename commands that sort files into sub-directories by the
file's timestamp or name, convert them to TitleCase, change spaces to '_' or
'.' (or just remove them), AND change their permissions - all in one command.
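
Just as an illustration (not one of the commands mentioned above), squashing
whitespace to underscores and lower-casing in a single pass looks like:

    rename -n 's/\s+/_/g; y/A-Z/a-z/' *

(again, '-n' is a dry-run)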

But most of the time, I just need to do a regexp search and replace on the
filenames.  Like this:

rename -n 's/\.someword//' *someword*

NOTE: The '-n' is a dry-run, it shows you what would be renamed if you allowed
it.  To actually rename, get rid of the '-n' or replace it with '-v' for
verbose output.  BTW, rename won't overwrite existing files unless you force
it to with '-f'.


Perl rename can take a list of filenames on the command-line or from stdin,
so you could do a recursive rename with either of these:

find . -type f -name '*someword*' -exec rename -n 's/\.someword//' {} +

find . -type f -name '*someword*' -print0 | rename -n -0 's/\.someword//'

> Something like this but with a different regex:
>
> # This is for a completely different file name structure
>
> find . -name '123*.txt' -type f -exec bash -c 'mv "$1" "${1/\/123_//}"' -- {} \;

You really don't want to be forking 'bash' AND 'mv' once for each matching file.

The following line tries to fit as many filenames on the command line as
will fit (the command line length limit is typically around 2MB these days),
so will almost always only run bash once. or maybe twice. as few times as
necessary, anyway.

    find . -name '123*.txt' -type f -exec bash -c 'for f in "$@"; do mv "$f" "${f/\/123_//}"; done' -- {} +

Note the use of '{} +' instead of '{} \;' - that's what makes 'find' try to
fit as many filenames onto a command-line as possible each time the -exec
option's command is executed.

Unfortunately, this still forks 'mv' once for each filename, so it's still going
to be slow.


Even better, don't use bash at all, use the right tool for the job:

find . -type f -name '123*.txt' -exec rename -n 's:/123_:/:' {} +

Or, to pipe a NUL separated list of filenames (to avoid command-line length
limits entirely):

find . -type f -name '123*.txt' -print0 | rename -n -0 's:/123_:/:'

(again, remove the '-n' or replace with '-v' to make it actually rename files
once you've verified that it does what you want).

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Renaming files

2019-10-09 Thread Craig Sanders via luv-main
On Wed, Oct 09, 2019 at 08:54:54PM +1100, Andrew McGlashan wrote:
> > xargs -0r mv -T /destination/ --

> The "mv -T /destination/" ... that doesn't seem to make sense to me...?
>
>   This from mv man page:
>
>-T, --no-target-directory
>   treat DEST as a normal file

typo. i meant "-t", not "-T".

   -t, --target-directory=DIRECTORY
  move all SOURCE arguments into DIRECTORY

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Renaming files

2019-10-08 Thread Craig Sanders via luv-main
On Wed, Oct 09, 2019 at 03:46:20PM +1100, Craig Sanders wrote:
> BTW, you could then pipe the output of the above pipeline into xargs to do
> something with the filename(s) matched. e.g. to move the matching file to
> another directory:
>
>   xargs -0r mv -T /destination/

Actually, that should be:

   xargs -0r mv -T /destination/ --

The "--" prevents any filenames from being misinterpreted as options, in case
any of them begin with a "-".

This, for example, prevents catastrophes like:

touch "-rf"
rm *

or:

find  -print0 | xargs -0r rm

Which brings up one of the main reasons for using NUL separators when dealing
with filenames - just like using "--", it is good, defensive-programming
best-practice.  It prevents the problems that would happen if you forgot or
didn't realise that there are files with annoying filenames.

The directory you're working with right now might not have any such filenames,
but it's still a good habit to write your one-liners and scripts defensively
because you will probably re-use it later.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Renaming files

2019-10-08 Thread Craig Sanders via luv-main
On Wed, Oct 09, 2019 at 02:15:44AM +1100, Andrew McGlashan wrote:
> On 8/10/19 9:33 am, Craig Sanders via luv-main wrote:
> > Either with 'find ... -exec' or, if you need to process find's list
> > of filenames (with grep or sed or something) make sure you use NUL
> > separated output and tools that can handle NUL-separated input
> > (e.g. 'find ... -print0 | grep -z ... | head -z -n 10 | xargs
> > -0r')
>
> find badly needs an option for sorting the output via timestamps.

Yes, that would be useful.  Not essential, though, as there are other
ways to do it.


> Want the last version of a file:
>   ls -rt|tail -1
>
> How do you do that with find?

Similar to how you'd do it with stat: with a printf format string. i.e. use
find's -printf option to output the file's change time (in seconds since the
epoch), a TAB, the filename, and a NUL separator.

find /path/ -type f -printf '%C@\t%p\0' |
  sort -z -k1,1 -r -n |
  head -z -n 1 |
  cut -z -f2

sort is then used to sort find's output by timestamp (reverse numeric
sort), then head to get only the first match, and cut to get rid of the
no-longer-needed timestamp.

or you could pipe it into awk or perl or whatever instead of any or all of the
sort, head, and/or cut commands.


see "man find" and search for printf for other file data that can be printed.

BTW, you could then pipe the output of the above pipeline into xargs to do
something with the filename(s) matched. e.g. to move the matching file to
another directory:

  xargs -0r mv -T /destination/


Unlike ls, this will not break if any filename contains newlines or other
annoying but valid characters.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: rename vs rename -- was Re: Renaming files

2019-10-07 Thread Craig Sanders via luv-main
On Tue, Oct 08, 2019 at 09:28:05AM +1100, Duncan Roe wrote:
> Under Slackware:
>
> > 09:03:07$ file $(type -p rename)
> > /usr/bin/rename: ELF 64-bit LSB executable, x86-64, version 1 (SYSV), 
> > dynamically linked, interpreter /lib64/ld-linux-x86-64.so.2, stripped

Dunno if slackware has it packaged, but the perl-based rename command is
available from https://metacpan.org/release/File-Rename

> Checked my Ubuntu VM (debian-based): no rename command.

On Ubuntu, "apt-get install rename".

Or, for very old versions of ubuntu (<= trusty 14.04):

apt-get install libfile-rename-perl


For details, see  
https://askubuntu.com/questions/956010/whats-the-difference-between-the-different-rename-commands

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Renaming files

2019-10-07 Thread Craig Sanders via luv-main
On Tue, Oct 08, 2019 at 12:47:15AM +1100, Andrew McGlashan wrote:
> I've been "reprimanded" in the past for doing something like
>
>for filex in $(ls 6H9*)
>...
>
>
> Everyone says, don't use "ls", it isn't needed.

It's true that ls isn't needed, but the real problem is that parsing the
output of ls is unreliable and potentially dangerous. And it can't deal with
filenames which have completely valid characters like spaces, tabs, newlines,
and shell meta-characters.

The **ONLY** characters that are not valid in a unix filename are
forward-slash and NUL.  **ALL** other characters are valid.  If you write your
scripts without taking that into account then your scripts are broken.

The output of ls should never be used for anything other than viewing in a
terminal.


See 
https://unix.stackexchange.com/questions/128985/why-not-parse-ls-and-what-to-do-instead

In short, use shell globbing (aka "wildcard" characters).  If globs can't do
what you want, use find rather than ls.

Either with 'find ... -exec' or, if you need to process find's list of
filenames (with grep or sed or something) make sure you use NUL separated
output and tools that can handle NUL-separated input (e.g. 'find ... -print0 |
grep -z ... | head -z -n 10 | xargs -0r')

Most GNU tools these days have a '-z' option for that. some others do too.
and perl has '-0', as does 'xargs'.  With awk you can set the input (and/or
output) record separator with RS="\0" (or ORS="\0").
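
e.g. something like this (illustrative only - it assumes /tmp/old-logs/
already exists) stays safe no matter what characters the filenames contain:

    find . -type f -name '*.log' -print0 |
      grep -z 'error' |
      xargs -0r mv -t /tmp/old-logs/ --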

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Renaming files

2019-10-07 Thread Craig Sanders via luv-main
On Mon, Oct 07, 2019 at 07:46:50PM +1100, Andrew Greig wrote:
> Rename 6H9A0001.CR2 6J9A0001.CR2
> Rename 6H9A0002.CR2 6J9A0002.CR2
> Rename 6H9A0003.CR2 6J9A0003.CR2
> Rename 6H9A0004.CR2 6J9A0004.CR2
> to
> Rename 6H9A0085.CR2 6J9A0085.CR2

Use the perl rename utility (aka prename or file-rename), **NOT**
the rename aka rename.ul util in the util-linux package, which has
completely different command-line options and capabilities.

e.g.

rename -n 's/6H/6J/' *.CR2

The '-n' makes it a dry-run, showing what it would change if you let it.
To actually rename files, remove the '-n' or change it to '-v' for
verbose output.


BTW, perl rename is far more capable than trivial sed-like renames like this.
any perl code that modifies $_ can be used as the rename script.


If you're running a recent debian or similar, 'apt-get install rename' to
make sure you have the perl rename instead of rename.ul (on older versions
of debian, package was called libfile-rename-perl).

It's packaged for other distros too.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: modern anti-spam

2019-10-05 Thread Craig Sanders via luv-main
On Tue, Oct 01, 2019 at 04:22:48PM +1000, Russell Coker wrote:
> I currently have SpamAssassin set to reject anything over 5.0, but I'm still
> getting a lot of spam.
>
> Are there any other good options?  I haven't tweaked SA much, just used mostly
> the default Debian settings with a few whitelist entries (which are not
> responsible for the spam).  It could be that I'm not running SA properly, if
> anyone has some quick tips on how to optimise it then I'm interested.  I'm not
> so interested in detailed configuration changes because I don't want to go 
> down
> the path of SA tweaking and because the server I run has lots of people from
> different regions and some use of multiple languages so I can't just tweak it
> for me personally.  I guess I could try the Bayes stuff, does that give much 
> of
> a benefit and how easy is it to setup?

I run spamassassin and my own custom postfix junk maps and header_checks
rules.  I wrote scripts to generate custom SA rules from the same domains that
go into the junk maps.  I don't bother with body_checks rules much these days
because most spammers figured out base64 encoding years ago (so i catch that
crap with SA).

Every few weeks or months I trawl through my spamassassin quaratine mbox to
find new stuff for the header_checks rules. and also to look for the rare
false positive.

fail2ban monitors my mail logs and blocks repeated failures.  I also have
a bogus secondary MX (because secondary MXs are preferred by spammers -
they think it's a way to get past more stringent anti-spam rules on primary
MXs). It's just another postfix smtpd on my mail server host, listening on a
different IP address.  It soft-rejects **everything**. fail2ban monitors the
log for this too.

I've also got some nice f2b-loop rules that ban repeat offenders for longer
and longer times (the more often an IP address is banned for any reason, the
sooner it ends up in the permanent all-ports f2b-loop5 chain).

I also use various RBLs, including country code RBLs that block mail based on
the country of origin. Since it's my own **personal** mail server, it's safe
to do that - I don't know anyone in China or Russia or any of the other common
spam-source countries.  Not safe to do on a shared or work mail server.

Similarly, I reject all mail from the TLDs that seem to be populated solely or
almost-entirely by spammers - .biz, .tech, .info, .loan and many more.
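
A sketch of one way to do the TLD blocking (illustrative only, not my actual
rules - it needs the postfix-pcre package installed):

    # /etc/postfix/reject_tlds.pcre
    /\.(biz|info|loan|top)$/    REJECT  mail from this TLD is not accepted here

    # main.cf fragment
    smtpd_sender_restrictions =
        check_sender_access pcre:/etc/postfix/reject_tlds.pcre,
        permit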



The downside is that it's a fair amount of work to maintain, even with all the
scripts i've written to semi-automate it (mostly reading spam and copy-pasting
spammer domains and spammy phrases and coming up with good regexps).  An hour
or two per week or sometimes more.

But a) i've been doing this since the mid-1990s, so it's easy and b) I'd
rather do this than trust my mail to google or some other corporation.


BTW, here's a useful header_checks rule (rejects mail from Mr, Mrs, Miss, Dr,
Eng. etc):
etc):

# Mr/Mrs/Ms/Miss/Dr/etc 419-spam.
/^(?:Subject|From):\s*"?(?:From\s*)?(?:[DM]rs?\.?\b|Ms\.?|Miss|Duke|CEO|Barrister|AGENT|Eng\.|Engineer)/
   REJECT HCP_105

I don't think I've **ever** seen a legitimate mail that included such a title
in the From: or Subject: line - it's all 419 spam.

The "HCP_105" is a code to help me match log entries with header_checks
rules in case I spot a false positive in the logs.  I'm currently up to
HCP_136. That's about 164 rules because some of them have suffixes like
HCP_135a, or HCP_135b because I group them by "theme" (nigerian 419 scams,
pill spam, porn spam, religious nutter spam, loyalty card scams, bogus hacker
bitcoin blackmail spams, etc) and also split them into multiple rules when
they get too long.



> At this stage I'm even considering challenge-response.  I figure if
> everything below a SA score of 3.0 was let through, everyone who I've ever
> sent mail to or who I've received good mail from was let through, and
> everything with a SA

Don't do challenge-response.

What it does is offload YOUR spam problem onto the people whose email
addresses have been forged by a spammer, making it THEIR problem.

It's obnoxious.  It's just another form of backscatter-spam.

And if you need personal motivation rather than just a wish to be a good net
citizen: it will get your server onto backscatter RBLs.

craig

ps: many years ago I wrote some procmail rules that look for Challenge
Response messages and automatically fetches the URL embedded in the message
with curl.

e.g.

# yes, auto-confirming your spam is a free service i'm happy to provide.
# thank you for off-loading your spam onto me.
:0
* ^Subject: Please confirm your message
| lynx -dump -nonumbers -listonly -stdin | grep 'tmda\.cgi' | xargs -d'\n' -n1 curl >/dev/null 2>&1

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Talks?

2019-06-25 Thread Craig Sanders via luv-main
On Sat, Jun 22, 2019 at 09:01:28PM +1000, Russell Coker wrote:
> We are currently having problems where mail Andrew sends to luv-main gets
> blocked by localhost.
>
> # postconf -d|grep mynet
> mynetworks = 127.0.0.0/8 10.10.10.0/24 [::1]/128 [2a01:4f8:140:71f5::]/64
> [fe80::]/64
>
> Below are the relevant log entries.  It seems that ::1 is not being accepted
> as an exclusion for spam checks, from the above you can see that ::1 is in
> mynetworks and from the attached main.cf you can see that permit_mynetworks is
> before other checks.  Any ideas as to what the problem might be and why it
> only seems to affect Andrew's mail?
>
> To clarify, what happens is that outbound mail from the list server is sent to
> localhost and the Postfix instance on localhost is rejecting it.
>
> >From the attached master.cf you can see that localhost is excluded from
> SpamAssassin and ClamAV checks.

> Jun 18 16:21:47 itmustbe postfix/cleanup[23587]: CADE6B0AD: reject: header
> From: achalmers--- via luv-main  from localhost[::1];
> from= to= proto=ESMTP
> helo=: 5.7.1 550 Message rejected Mail from a likely spam
> domain 10002

That's not a standard postfix rejection message, so it's coming from something
else - perhaps one of your smtpd milters?

googling for the error message text (in quotes) doesn't come up with anything,
so it doesn't seem to be a common error message - it's probably a custom rule.
try grepping for "Mail from a likely spam domain" in your postfix & milter
etc config files.

What domain is the mail coming from? has that domain somehow got itself onto
an RBL?



BTW, the fact that the message even gets a postfix queue id means that postfix
has, at some stage, accepted the message. messages rejected during the initial
smtpd session get tagged with NOQUEUE in the logs instead of a queue id. which
means that it's unlikely to have anything to do with the ::1 address.

Try examining the entire chain of events for a single message - i.e. grep for
the postfix queue ID, e.g. 'grep CADE6B0AD: /var/log/mail.log'.

This may show other related IDs that need to be grepped for if the message is
passed to an external filter and then back into postfix (this is pretty normal
on my postfix box, because I use amavisd as a content_filter, but I don't know
if you'd see the same using a milter).

(i wrote a perl script years ago to do a two-pass search for mail.log entries.
give it a search regexp such as an email address and it'll find all the queue
ids in the log matching that, then it'll grep for those queue ids in the log.
you can find it at http://taz.net.au/postfix/scripts/mailgrep.pl. try it with
something like: 'mailgrep.pl -s "from=
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: linux heart rate monitor / activity tracker?

2019-04-05 Thread Craig Sanders via luv-main
On Sat, Apr 06, 2019 at 12:36:31PM +1100, Stephen wrote:
> Hi Craig,
>
> Not sure what sports you're targeting, it can make a difference to apps.

this is not at all related to sports or fitness training - it's entirely about
monitoring heart rate and related biological data, health informatics and
bio-feedback. there's obviously some crossover with sporting/training stuff,
but that's not my interest or focus.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


linux heart rate monitor / activity tracker?

2019-04-05 Thread Craig Sanders via luv-main
I'm looking for both wearable hardware and software for heart rate monitoring
on linux. On wrist or chest strap is fine.

Heart rate monitoring is essential. Oxygen saturation and ECG functionality
would be nice too.

I've got a better idea of what I **DON'T** want than what I do.  I don't want
anything that requires sync to a corporate server, I want to BUY a product,
not BE one (optional syncing is fine, I can choose not to use it).  I just
want something that gathers data and either transmits it in real time to my
bluetooth-paired desktop machine and/or android[1] device, or logs the data
for later download.

I'll also need software to display and/or analyse the data.



I've seen lots of cheap devices on ebay for $10-$20 but have no idea if they
can be made to work with linux or if they use their own proprietary protocols
and only work with their spyware app.

I'd prefer to avoid brand name equipment, because I see no good reason to pay
$300 or $400 for hardware not significantly better than the stuff that sells
for under $20 - especially when I explicitly do not want any "value added"
cloud services they provide, services that get promoted as features but I see
as spying malware.


in short what i'm after is:

 * cheap
 * open source data acquisition, visualisation, & analysis
 * open data formats
 * no mandatory spyware service


Any clues or pointers to relevant hardware or software would be appreciated.


[1] i've found https://github.com/Freeyourgadget/Gadgetbridge (which is
available on the F-Droid app store) so something compatible with that would
be good.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


(fwd) Re: Proposed road map for 1 SSD & 2 HDDs set up in btrfs as RAID1 -- success!!

2019-02-25 Thread Craig Sanders via luv-main
- Forwarded message from Andrew Greig  -

Date: Mon, 25 Feb 2019 21:50:56 +1100
From: Andrew Greig 
To: Craig Sanders 
Subject: Re: Proposed road map for 1 SSD & 2 HDDs set up in btrfs as RAID1 -- 
success!!

Hi all,

I have a working GUI and and a working btrfs RAID1, data is loading at present
but it is a lot slower as the feed is from USB3 to a SATA HDD. But I can work
with that. Really overjoyed, I have learned a lot, and probably tormented
Craig and the list a lot, BUT I AM GRATEFUL!!

Thank you

Andrew

On 25/2/19 1:22 pm, Craig Sanders via luv-main wrote:
> On Mon, Feb 25, 2019 at 06:44:57AM +1100, Andrew Greig wrote:
> > Now it is time to see if I have learned anything.
> >
> > 1Tb SSD and 2 x 2Tb SATA HDDs, motherboard is a UEFI board but I have never
> > used UEFI with this board.
> >
> > Ubuntu desktop obviously ignored it.
> >
> > Using Gparted partition the SSD thus:
> >
> > 512MB EFI Partition /dev/sda1, formatted FAT32, primary   (boot?)
> > 8 Gb allocated to SWAP
> >
> > rest of the disk /dev/sda2, primary, formatted btrfs mounted as /
> Looks good except that if you have an EFI partition (sda1) and a swap
> partition (sda2) then the btrfs root partition will be sda3, not sda2.
>
> > Partition the first HDD /dev/sdb1primary, formatted btrfs , mounted as /data
> >
> > Second HDD same size leave completely blank
> The second 2TB HDD (sdc) needs to be partitioned exactly the same as the first
> (sdb).
>
> I don't think gparted will let you create a partition without formatting it
> (as a filesystem or as swap or whatever), so just let it format the partition
> and then run the btrfs commands as below.
>
> > After the system is installed add the second HDD withtwo lines in the
> > terminal
> >
> >    sudo btrfs device add -f /dev/sdc1 /data
> >    sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data
> You can do this after the system is installed, or from a root shell while
> booted on gparted.  It doesn't matter either way, although it's probably
> better/easier to do it from gparted (there'll only be the one /data fs for the
> ubuntu installer to detect so you won't end up with /data0 and /data1 again)
>
> craig
>
> --
> craig sanders 
> ___
> luv-main mailing list
> luv-main@luv.asn.au
> https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main

- End forwarded message -
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Proposed road map for 1 SSD & 2 HDDs set up in btrfs as RAID1

2019-02-24 Thread Craig Sanders via luv-main
On Mon, Feb 25, 2019 at 06:44:57AM +1100, Andrew Greig wrote:
> Now it is time to see if I have learned anything.
>
> 1Tb SSD and 2 x 2Tb SATA HDDs, motherboard is a UEFI board but I have never
> used UEFI with this board.
>
> Ubuntu desktop obviously ignored it.
>
> Using Gparted partition the SSD thus:
>
> 512MB EFI Partition /dev/sda1, formatted FAT32, primary   (boot?)
> 8 Gb allocated to SWAP
>
> rest of the disk /dev/sda2, primary, formatted btrfs mounted as /

Looks good except that if you have an EFI partition (sda1) and a swap
partition (sda2) then the btrfs root partition will be sda3, not sda2.

> Partition the first HDD /dev/sdb1primary, formatted btrfs , mounted as /data
>
> Second HDD same size leave completely blank

The second 2TB HDD (sdc) needs to be partitioned exactly the same as the first
(sdb).

I don't think gparted will let you create a partition without formatting it
(as a filesystem or as swap or whatever), so just let it format the partition
and then run the btrfs commands as below.

> After the system is installed add the second HDD withtwo lines in the
> terminal
>
>   sudo btrfs device add -f /dev/sdc1 /data
>   sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data

You can do this after the system is installed, or from a root shell while
booted on gparted.  It doesn't matter either way, although it's probably
better/easier to do it from gparted (there'll only be the one /data fs for the
ubuntu installer to detect so you won't end up with /data0 and /data1 again)

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: shrinking btrfs (was Re: Boot problems after adding 2 new disks)

2019-02-23 Thread Craig Sanders via luv-main
On Sun, Feb 24, 2019 at 03:15:19PM +1100, Andrew Greig wrote:
> This should be my last message on this issue (I sincerely hope so as I have
> probably redefined the meaning of "needy")
>
> I lost the message related to the setting up of one btrfs drive and then
> using the force (-f) feature to get it to add the device to the array

i don't know which message you're referring to. i already re-sent one to you,
but it seems that wasn't the right one. you can find all messages in this
thread in the LUV archives at:

https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


shrinking btrfs (was Re: Boot problems after adding 2 new disks)

2019-02-23 Thread Craig Sanders via luv-main
[ you accidentally sent this Q as private mail. replying back to the luv-main
list ]

On Sun, Feb 24, 2019 at 08:33:25AM +1100, pushin.linux wrote:
> Hi Craig,I was wondering if btrfs allows "shrinking" a patition to create
> free space, and if swap at the end of an SSD was better than at the start of
> a standard SATA drive

that's the kind of question that a search engine like google or
duckduckgo is good for. Also Q sites like https://askubuntu.com/ or
https://unix.stackexchange.com/

It's been years since I used btrfs for anything real (i use ZFS), so I
searched for "shrink btrfs partition" and found that it is possible.  But
first you need to know that resizing ANY filesystem always involves two steps:
resizing the fs itself, and resizing the partition that it's on. and the order
of those two steps depends on whether you are shrinking or enlarging the fs.
to shrink an fs, you first shrink the fs itself and then the partition. to
enlarge, you first enlarge the partition and then the fs.


For single-disk btrfs like on your root fs, it's fairly easy, just boot with
the "gparted live" CD/USB[1] and tell it to resize your root btrfs partition
(sda2, i think).  That will resize both the fs and the partition.

For a btrfs pool with multiple partitions/disks, it's more complicated
because gparted operates on individual drives so it doesn't resize all of the
drives/partitions in a btrfs fs at once. You have to resize each partition in
the btrfs pool separately. e.g. if you wanted to resize your /data filesystem,
you'd first have to run "gparted /dev/sdb", resize sdb2, and then "gparted
/dev/sdc" and be careful to change sdc2 to EXACTLY the same size as sdb1.



Personally, for a relatively trivial 4 or 8GB of swap space, i don't think
it's worth the bother or the risk - messing with partitions is always a risk,
it is very easy to make a mistake and that leads to data loss.

swap *IS* faster on an SSD (everything is faster on an SSD), but when you get
24GB RAM installed your system isn't going to be swapping much - it certainly
won't be thrashing stuff in and out of swap and causing performance problems.
swap usage will be occasional data + code that hasn't been in use for a while.



[1] https://gparted.org/

the gparted web site also has lots of useful info about partitioning and
filesystems, so is a good place to learn the whys and wherefores of all this
stuff.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-23 Thread Craig Sanders via luv-main
On Sat, Feb 23, 2019 at 06:32:43PM +1100, Andrew Greig wrote:

> > 1. unmount both of them
>
> $sudo umount /dev/sdb1 && umount /dev/sdc1  ?

or "sudo umount /data0 /data1"

as long as no process has any file open under those directories (and that
includes having a shell with its current working directory in either of
them - you can't unmount a filesystem that is being actively used), both
directories will be unmounted.
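
If umount complains that a filesystem is busy, something like this will show
you what's still using it:

    sudo fuser -vm /data0    # processes with files open under that mount
    sudo lsof /data0         # similar, one line per open file (when /data0 is a mount point)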


> > 2. remount ONE of them (say, data0) as /data (and edit /etc/fstab so that it
> > gets mounted as /data on every reboot. also delete the line in fstab that
> > mounts data1).
>
> Here is my current fstab (please note, partition manager took me an hour and
> a half to negotiate and I was unable to install swap on my SSD so I put a
> swap partition on each of the two SATA drives so that they would be exactly
> the same size. )

That's a shame because swap on SSD is much faster than swap on HDD.  Of course
when you upgrade your RAM, it probably won't swap much.


Once you get your RAM upgrade installed, I strongly recommend that you install
libvirt and virt-manager and create some VMs to play with. e.g. make a VM and
give it three 5GB disk image files (i.e. similar to your current system with
three drives).

Then install ubuntu onto it. you can mess around with the partition manager
(or even fdisk on the command line) until you understand how it works without
risking anything on your real system. and try different variations on the
build (e.g. install ubuntu onto one of the VM's virtual disks, boot it up,
and then manually partition the other two virtual disks and format them with
btrfs and add them to fstab. and experiment also with other filesystems and/or
mdadm and/or lvm2 if you like).

That's one of the things VMs are good for, to experiment and test things and
especially to learn. In fact, they're an excellent way to learn stuff.  Things
like partition management and formatting partitions are hard and a bit scary
because they are things that are very rarely done by most people - only when
building a new machine or adding new drives to a machine. Practice is the only
thing that will make it familiar and comfortable.  Do this every few months
to keep the memory fresh so that you will know what to do and how to do it
if/when you ever need to.



> # /data0 was on /dev/sdb2 during installation
> UUID=0e8718c8-03bf-4f1a-915f-df03fe117dc0  /data0  btrfs  defaults  0  2

edit this line, change data0 to data.

> # /data1 was on /dev/sdc2 during installation
> UUID=5969127b-f5e0-40dc-98ba-ea7252c9ee41  /data1  btrfs  defaults  0  2

delete or comment out this line.


then, save & exit, and run "sudo mount /data"



> # /efi was on /dev/sda1 during installation
> UUID=b588608e-8cf7-43be-8a53-03dfde6f8f15  /efi  btrfs  defaults  0  2

the EFI partition should be FAT32.  UEFI can't use btrfs.  I guess that means
it's not being used at all - your machine is either old-fashioned BIOS or, if
UEFI, it's configured for legacy (BIOS) boot.


> > 3. destroy the partition table on the data1 drive, and recreate it (again,
> > one big partition for the entire disk[1])
>
> So by deleting the partition we eliminate the FS (btrfs) and in the addition
> step the FS is rebuilt?? but specifically to control both disks?

No, it's just deleting and re-creating the partition. creating a partition and
formatting it are two different things.  A partition is just a chunk of disk
space reserved for some particular use.  That use can be to be formatted as
one of several different filesystems (ext4, xfs, btrfs, fat32, etc etc), to be
used as swap space, for an lvm physical volume (PV), or just left unused.


But now that i know you've got a swap partition on there, DON'T DELETE THE
ENTIRE PARTITION TABLE.  Just delete /dev/sdc2. better yet, don't bother
deleting it at all, this step can be skipped.

You can actually skip step 3 entirely: the '-f' option used in step 4 ('btrfs
device add -f ...') should force it to use /dev/sdc2 even though it is already
formatted as btrfs.

> Can /dev/sdc2 can be deleted with gparted?

yes.


> > 4. add that drive to the existing btrfs array on /data
> >
> > e.g. *IF* /data1 was sdc1, you'd do something like:
> >
> >  sudo btrfs device add -f /dev/sdc1 /data
> >  sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data

change sdc1 here to sdc2.


> > The earlier you do this (i.e. the less data is already on it), the faster
> > this conversion to raid1 will be.  Nearly instant if there's little or no
> > data.  Much longer if there's a lot of data that needs to be synced to the
> > other drive.
> >
> > i.e. best to do it before copying the data from your old drive.
>
> I have about 4Gb only of data from this morning's photo shoot, I can move
> that back to /home/andrew easily enough. I just tried the Data drive to see
> how my CHOWN went. ( I cheat, I use mc)

No need.  4GB of data will be synced in very little time.


craig

--

Re: Boot problems after adding 2 new disks

2019-02-22 Thread Craig Sanders via luv-main
On Sat, Feb 23, 2019 at 04:26:25PM +1100, Andrew Greig wrote:
> Referring to an earlier message about my data drives, do I need to CHOWN
> those drives to andrew:andrew and then set the permissions to rwx?

I think i said perms should be 664. that was wrong. the execute bit is needed
to access a directory, so it should be 775 (rwxrwxr-x).

770 (rwxrwx---) would also work if you didn't want any other accounts on the
system (other than root and andrew, and any accounts that you add to group
andrew) to access it.

the chown and chmod commands need to be run so that your user is able to read
and write to the /data directory.  Otherwise it'll be owned by root and only
writable by root.
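
Concretely, something like (using your username from earlier in the thread):

    sudo chown andrew:andrew /data
    sudo chmod 775 /data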




NOTE: the chown and chmod need to be done while /data is mounted.  This only
needs to be done once, and will retain the owner & permissions metadata
whenever it is remounted (e.g. on a reboot).

if you do the chown & chmod while the /data fs isn't mounted, you'll only
be changing the permissions of the empty mount-point directory, not of the
filesystem.

> I think you mentioned a symlink, would that be necessary if I have done the
> CHOWN?

the symlink was for convenience only. useful but not necessary. mostly so that
you can just navigate to your home dir and double-click on the symlink in any
GUI file chooser dialog. or from the command line "cd ~/data".

> How do I set up the RAID1 on the Data0 and Data1 drives, please?

see my previous message.  you should have only a /data fs combining
both the 2TB drives into a single btrfs raid1 array.

> I have btrfs on all drives. I am amazed at the speed of an SSD.

Yeah, they're bloody fast, aren't they?  and NVME SSDs are even faster.

> I will pick up the RAM and a cradle for the SSD as it does not fit anywhere
> in my case. It is just sitting in there at present.

There are no moving parts in an SSD, so it's safe to leave it just hanging
loose indefinitely until you get a cradle for it.  I wouldn't do that for a
HDD except in some of data-recovery emergency, but it's not a problem for an
SSD.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-22 Thread Craig Sanders via luv-main
On Sat, Feb 23, 2019 at 03:42:57PM +1100, Andrew Greig wrote:

> Now I need to plug in my old SATA drive and copy my data to one of my data
> drives.
>
> Small thing, when I was setting the partitions the system did not like /data
> on two separate drives so for the moment one is /data0 and the othe is
> /data1. If I load one of the drives with my data, as soon as RAID is setup
> will that data copy across to the other drive?
>
> And the next step is RAID

Well, kind of. Not if you're talking about using mdadm for RAID-1. btrfs does
its own raid. and volume management. There's no need for mdadm or lvm or
anything else.  If you've used btrfs for those drives then what you need to do
is:

1. unmount both of them

2. remount ONE of them (say, data0) as /data (and edit /etc/fstab so that it
gets mounted as /data on every reboot. also delete the line in fstab that
mounts data1).

3. destroy the partition table on the data1 drive, and recreate it (again, one
big partition for the entire disk[1])

4. add that drive to the existing btrfs array on /data

e.g. *IF* /data1 was sdc1, you'd do something like:

sudo btrfs device add -f /dev/sdc1 /data
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /data


The earlier you do this (i.e. the less data is already on it), the faster this
conversion to raid1 will be.  Nearly instant if there's little or no data.
Much longer if there's a lot of data that needs to be synced to the other
drive.

i.e. best to do it before copying the data from your old drive.
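
If you want to keep an eye on the conversion, something like:

    sudo btrfs balance status /data    # progress of a running balance/convert
    sudo btrfs filesystem df /data     # Data and Metadata should both show RAID1 once it's done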




[1] technically, you don't need a partition, btrfs can use the entire disk.
but IMO a partition table is useful for clearly identifying that a disk is in
use and what it is being used for.  It doesn't hurt in any way to have one
and the space used by the partition table is trivial - at most, a sector for
the partition table itself and another 2047 sectors[2] to ensure that the first
sector of the first (and only) partition is aligned at a 4K sector boundary.
i.e. 1MB out of your 2TB drive.


[2] it's not uncommon on disks with GPT partition tables (instead of the old
style ms-dos partition tables) to create a tiny partition in that area with
type EF02 for grub, especially if they're ever going to be used to boot grub.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-22 Thread Craig Sanders via luv-main
On Sat, Feb 23, 2019 at 02:30:46PM +1100, Andrew Greig wrote:

> Unfortunately I am stuck in tty1, I thought that the GUI was on tty7, but I
> have forgotten how to get there. I thought it used to be CTRL ALT F7

If you have a display manager (xdm, gdm, kdm, lightdm, etc) installed, it will
start up automatically and give you a graphical login.

> I have a system, now, but not usable by me at this stage.

Did you install gnome or kde (or xfce or whatever desktop environment you
prefer)? And all the GUI apps you intend to use?

This may be because you chose to do a "server" install.  I have no idea
what Ubuntu actually means by that, but I'd guess it doesn't include
X or other GUI stuff because they're generally not needed on "servers".

But it's not a big problem, nothing to worry about.  You can always apt-get
install whatever you need, whether you chose a "desktop" or a "server"
install.  It's the same OS, just with different programs installed by default.
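e.g. a minimal sketch, assuming you want xfce plus a graphical login (package
names here are the usual Ubuntu ones; adjust to whatever DE you actually want):

sudo apt-get update
sudo apt-get install xubuntu-desktop     # the full xfce desktop, or...
sudo apt-get install xfce4 lightdm       # ...a more minimal xfce + display manager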

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-22 Thread Craig Sanders via luv-main
On Fri, Feb 22, 2019 at 05:33:53PM +1100, Andrew Greig wrote:
> I have purchased a new 1Tb SSD and I have two unused SATA 2Tb drives, and
> currently 8Gb RAM (max capacity 32Gb DDR3 1866) I will settle for 24Gb soon.

24GB is nice.  With that and the SSD, you should see an enormous boost in
performance.  No more twiddling your thumbs waiting for it to boot.

of course, not long after you get used to the new speed, it'll start to seem
unbearably slow :)

> MB = ASRock 890 GM Pro3 5 sata slots

I guess that means you have a Phenom II CPU or maybe one of the early FX
series chips.  Nice CPUs for their day, and still pretty good even today. most
of my machines have these.

If you have an FX CPU, they're happiest with DDR3-1866 RAM.  DDR3 is slowly
disappearing from the market so you have to get what's available - other
speeds will work if you can't get 1866, but 1866 is optimal.

BTW, if you're not sure exactly what CPU you have, run 'lscpu | grep
Model.name'.  You'll see output like this:

# lscpu | grep Model.name
Model name:  AMD FX(tm)-8320 Eight-Core Processor

or

# lscpu | grep Model.name
Model name:  AMD Phenom(tm) II X6 1090T Processor


> Question - Should I choose Ubuntu 18.04 LTS or install 18.10 which will need
> an upgrade at the end of July?

It really depends on whether you want to upgrade every 6 to 12 months (18.10),
or every two years (LTS).

Stuff like gimp and darktable tend to be fairly fast moving, so upgrading them
every six months or so is probably a good idea.

I'm generally in favour of keeping systems upgraded regularly.  IMO two years
is too long between upgrades.  Free Software development moves way too fast
for that.


craig

PS: what kind of GPU do you have? if you do a lot of graphical work, it may
be worthwhile comparing some of the current low-end to mid-range models to
your current card.  A modern $200-$300 GPU should be 2 to 3 times faster than,
e.g., a high-end GPU from 5 years ago, and use significantly less power.  But
this is definitely something that needs significant research before buying
anything. googling "old model name vs new model name" gets good results. e.g.
"gtx-560 vs gtx-1050" leads to several review sites which say that the 1050
(~ $170) is roughly 87% faster than the 560 (i.e. about 1.87x its performance), and uses only 75 Watts
rather than 150 W.  The next model up, a "1050 Ti" is a bit over twice as fast
and costs about $200, also using 75W. and the GTX-1060 3GB model is about 3.65
times as fast as a GTX-560 and costs about $250 (using 120 W)

BTW, "2-3 times as fast as what I currently have for $200-$300" is generally
what I wait for when upgrading my GPU.  Unless noise and power usage are a
problem, it's not really worth the cost of upgrading for anything less.
Sometimes, though, new features of newer cards (like better video decoding or
newer opengl/vulkan) makes it worth upgrading earlier.

There are various AMD Radeon models of similar performance and price.  Unless
you're willing to use the proprietary nvidia driver, you're better off with an
AMD GPU. their open source driver is much better than the open source nouveau
driver for nvidia.  I mostly use nvidia cards with the proprietary nvidia
driver (the AMD fglrx driver always sucked and the open source drivers for
both amd and nvidia used to suck. now they're kind of decent, especially the
AMD driver, unless you do a lot of 3D gaming at 1440p or better with all the
pretty turned up to Ultra)

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-22 Thread Craig Sanders via luv-main
On Fri, Feb 22, 2019 at 08:20:48PM +1100, Andrew Greig wrote:

> The 1Tb is an SSD for speed and I have another 2 x 2Tb drives for my data.
> After 3 years of photography and 13,000 images in raw, proofs and full size
> jpgs I have around 500Gb of data. This should meet my needs for 2 years at
> least at which time I will build a bigger machine.
>
> I am in the partitioner at present, manual  chosen,
>
> I want root on the SSD
>
> LVM VG ubuntu-vg LV root - 2.0TB Linux device-mapper (linear)  is what I am
> presented with
>
> so do I need to change root to home?
>
> LVM VG ubuntu-vg, LV  swap_1 - 1.0 GB Linux device-mapper (linear)

You don't need LVM if you're using btrfs, it doesn't give you anything that
btrfs doesn't - it'll just make your disk management more complicated. Delete
the partition tables from all 3 drives and create them manually.

1. sda  (1 TB SSD)

You'll need a partition for EFI (optional), a swap partition and a btrfs
partition for the root fs.  4 or 8GB should be plenty for swap. the btrfs
partition should be the remainder of the disk.

If your motherboard is old-style BIOS rather than UEFI, you don't need a
FAT32 partition.

sda (if BIOS):

4-8GB swap
remainder for btrfs root fs

sda (if UEFI or if you think you might move this disk to a UEFI machine in 
future):

512 MB EFI partition
4-8GB swap
remainder for btrfs root fs
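If you end up doing this by hand rather than in the installer, a rough sketch
with sgdisk (assuming the SSD is /dev/sda - double-check the device name first,
this wipes the disk):

sudo sgdisk --zap-all /dev/sda                        # wipe old partition tables
sudo sgdisk -n 1:0:+512M -t 1:ef00 -c 1:EFI /dev/sda  # EFI partition (skip on BIOS-only boards)
sudo sgdisk -n 2:0:+8G   -t 2:8200 -c 2:swap /dev/sda
sudo sgdisk -n 3:0:0     -t 3:8300 -c 3:root /dev/sda # rest of the disk
sudo mkfs.fat -F32 /dev/sda1
sudo mkswap /dev/sda2
sudo mkfs.btrfs /dev/sda3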

Setting this up with btrfs now gives you the option of easily converting it
to raid-1 later. just add an identical drive, partition it exactly the same,
and tell btrfs to add the new partition to the existing one. btw, because the
second drive has identical partitioning, you'll have another free partition
the same size as your swap partition. you can use that for more swap, or
format it and use it for /tmp or something. i'd just add it as more swap.

Using btrfs for the root fs also allows you to use btrfs snapshots, and btrfs
send for backups.
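As a rough illustration of what that gets you (the /.snapshots directory and the
/backup mount point are just examples - any second btrfs filesystem will do as a
send/receive target):

sudo btrfs subvolume snapshot -r / /.snapshots/root-2019-02-22    # read-only snapshot
sudo btrfs send /.snapshots/root-2019-02-22 | sudo btrfs receive /backup/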


2. both sdb and sdc (2 x 2TB HDD):

1 big partition for btrfs /data.

the installer should ask you where you want to mount this (/data) when you set
it up.


> So how do I partition this so that root and boot are on the 1.0TB SSD

You don't really need a separate partition for /boot, it works just fine as a
subdirectory of /. Old-timers like me only do that out of habit from the days
when it was useful to do so.

> and so that /home is the RAID array of two disks of 2TB each?

I'd leave /home on the SSD - it's fast, and it's much bigger than you need for
the OS.  Having all your config files and browser cache and the data you're
currently working with on the SSD will be a huge performance boost.

Use /home on your SSD as fast working space (editing your images and videos
on the SSD will be MUCH faster than editing them on the HDD), and move the
completed work to subdirectories under /data - i.e. use /data for long-term
bulk storage.

So, as noted above, format the 2 x 2TB drives with btrfs and mount them as
/data.

for convenience, you can make /data owned by your user and symlink it
into your home directory (which will let you access it as /data and/or as
/home/yourusername/data):

sudo chown yourusername:yourgroupname /data
sudo chmod 775 /data    # a directory needs the execute bit to be usable, so 775 rather than 664
ln -s /data/ /home/yourusername/

When you restore your data from your old 1TB HDD, remember to copy it
to subdirectories under /data, rather than under /home.


BTW, if there's any possibility that you might want to use some of the space
on /data for something not directly related to or belonging to your user (e.g.
if you have a second user on the machine, or want to use it for a squid cache
or web site or to store VM images or whatever), then don't use the top level
of /data directly. use a subdirectory with, e.g., the same name as your user
name.

i.e. instead of the commands above, try something like this instead:

sudo mkdir /data/yourusername
sudo chown yourusername:yourgroupname /data/yourusername
sudo chmod 775 /data/yourusername   # again, 775 so the directory stays traversable
ln -s /data/yourusername /home/yourusername/data

I recommend doing this anyway even if you don't think you'll need it.  It
doesn't hurt to have it, and if you ever change your mind it's already set up
to make it easy to use for other purposes.

> I am in Guided Partitioning at present, next step is Configure Sotware RAID
>
> Then Configure the Logical Volume Manager
>
> Then configure encrypted volumes
>
> Then configure iSCSI volumes

Ignore all that.  You don't need LVM or iscsi, and I'm guessing you don't care
about or want the complications of full disk encryption.

Just set up 2 btrfs filesystems, one for the rootfs, the other for /data.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


RAM & LUV's HW Library (was Re: Boot problems after adding 2 new disks)

2019-02-21 Thread Craig Sanders via luv-main
On Fri, Feb 22, 2019 at 10:22:38AM +1100, Russell Coker wrote:
> In regard to the hardware advice. The LUV hardware library often has DDR3
> RAM for free, but 4G modules don't hang around long. If anyone is upgrading
> from a DDR3 system to DDR4 please donate your old RAM as lots of people have
> a use for this.

When I get around to upgrading my systems to use DDR-4, I'll have a bunch of
8GB DDR-3 sticks to donate (with speeds ranging from DDR3-1333 to DDR3-1866).
That won't be for some time, though.

my current plan is to merge my mythtv box (FX-8150, 16GB RAM) and my
file/dns/web/kvm/everything-server (phenom ii 1090T, 32GB RAM) into a single
threadripper 2920x or 2950x machine with at least 64GB (not because i need
that many CPU cores, but because I really need the PCI-e lanes...Ryzen 5
& 7 only have 20 lanes, which is not enough for GPU+DVB cards+SAS cards.
Threadripper has 64 lanes).

I can't afford to do that any time soon, though.  Even if i could find
somewhere that had the last-gen 8-core 1900x TR4 in stock (around $450, in
theory, if available, vs the ~ $1000 for 2920x or ~ $1400 for 2950x), 64GB of
new DDR-4 RAM would cost around $800 and a good X399 motherboard to suit would
cost around $400, for a minimum build cost of $1650 or so.


> Also we need more SATA disks, if anyone has disks of 300G+ that they don't
> need then please donate them.

Don't have any spare old drives, though.  I use them until they die.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-21 Thread Craig Sanders via luv-main
On Thu, Feb 21, 2019 at 11:14:13PM +1100, Andrew Greig wrote:
> Looking at the disks in gparted I have:
>
> /dev/sda1
> File system lvn2 pv
> Label
> UUID sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5
> Volume Group ubuntu-vg
> Members /dev/sda1  /dev/sdb1
> Partition /dev/sda1
> Name
> Flags boot/lvm
>
> /dev/sdb1
> File system lvm2 pv
> Label
> UUID  9HV3H6-JIYu-IdaS-2CGr-lkZQ-9xcB-RVu9Ks
> Status  Active
> Volume group /dev/sda1  /dev/sdb1
> Logical Volumes root  swap-1
> Partition Path /dev/sdb1
> Name
> Flags lvm
>
> /dev/sdc1
> File system  lvm2 pv
> Label
> UUID mqbYsB-xpm2-7c11-RLN5-q47a-A0bB-wcefad
> Status Not active(not a member of any volume group)Volume Group
> Members
> Logical Volumes
> Partition Path /dev/sdc1
> Name
> Flags lvm

It looks like you've added one of the two new 3TB drives to the same volume
group as your root fs and swap partition.  The other 3TB drive has been turned
into an unrelated volume group.   Why?

Which drive is the old 1TB drive?  and which are the new 3TB drives?

My *guess* is that sdb1 is the old 1TB drive (because that's the only one
where the root and swap-1 LVs are mentioned).  If that's the case, then I'll
also guess that the 1TB drive is plugged into the second SATA port...so when
you plugged the new drives in, you plugged one of them into the first SATA
port.  Try swapping the cables for those two drives around so that the 1TB
drive is in the first port.

try running 'fdisk -l'.  That will show each disk and all partitions on
it, including the brand, model, and size of the drive. knowing the logical
identifiers is only half the story, you also need to know which physical drive
corresponds to those identifiers.

Once you have this information, i strongly recommend writing it down or
printing it so you always have it available when planning what to do.


> My current fstab is this
> andrew@andrew-desktop:~$ cat /etc/fstab
> # /etc/fstab: static file system information.
> #
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> #
> /dev/mapper/ubuntu--vg-root /   ext4errors=remount-ro 0 1
> /dev/mapper/ubuntu--vg-swap_1 noneswapsw  0 0
> andrew@andrew-desktop:~$
>
> So /dev/sdb1 is part of a lvm group but /dev/sdc1 is not
>
> What command do I use to get these added to the fstab? I haven't consciously
> formatted either of the two new drives,is there a step I have missed?

dunno, there isn't enough info to safely give any direct instructions. the
best I can give is generic advice that you'll have to adapt to your hardware
and circumstances.

But the first thing you need to do is undo the existing mess - why did you add
one of the new drives to the existing volume group (VG)? and, since you added
the new drive, why didn't you just create a new logical volume (LV), format
it, and start using it?

You'll need to check that it isn't being actively used in the VG, and then
remove that drive from the VG before you do anything else.


> I haven't got the dollars for a M/B upgrade so I will purchase some more
> DDR3 Ram to get me to the limit of the motherboard, and I will purchase a
> SDD as recommended. It wouldf be nice to get thses disks running so that
> I can dump my data on to them and then add the SDD and do a fresh install
> using btrfs, which, I believe will give me an effective RAID 1 config.

The SSD or SSDs should be used for grub, the root fs /, the EFI partition (if
any), /boot (if it's a separate partition and not just part of /), and swap
space. the 3TB drives are for your home directory and data.

You don't want to mix the SSD(s) and the hard drives into the same btrfs
array.

You can, however, have two btrfs arrays: one for the boot+OS SSD(s), the other
for your bulk data (the 3TB drives).  If all your data is going to be under
your home directory then mount the latter as /home.  If you're going to use it
for other stuff too, mount it as /data or something and symlink into it (e.g.
while booted in recovery mode, or logged in as root with nothing running as
your non-root user: "mv /home /data/; ln -sf /data/home/ /")

BTW, if you only get one SSD but plan to get another one later, btrfs allows
you to convert it to RAID-1 at any time. So does ZFS, you can always add a
mirror to a single drive. To do the same with mdadm, you have to plan ahead
and create an mdadm degraded raid-1 array (i.e. with a missing drive) when you
partition and format the drive.
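For btrfs that later conversion is just a couple of commands - a sketch only,
assuming the second SSD shows up as /dev/sdb and is partitioned identically to
the first, with the btrfs partition as partition 3:

sudo btrfs device add /dev/sdb3 /
sudo btrfs balance start -dconvert=raid1 -mconvert=raid1 /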


Probably the easiest way to do this is to remove ALL drives from the system,
install the SSD(s) into the first (and second) SATA ports on the motherboard,
and the two 3TB drives into the third and fourth SATA ports.  Examine the
motherboard carefully and check the m/b's manual when choosing which port to
plug each drive into - the first port will probably be labelled SATA_0 or
similar.


Re: Boot problems after adding 2 new disks

2019-02-20 Thread Craig Sanders via luv-main
On Wed, Feb 20, 2019 at 10:25:13PM +1100, Andrew Greig wrote:
> I apologise for my carelessness. In the days when I needed frequent help
> (2000 - 2007) bottom posting was preferred, and so I defaulted to that
> position. It was not laziness, just a lack of awareness that I included too
> much of the thread. Most of my early days of assistance were fixed within
> one or two posts.

Bottom posting has NEVER been preferred. it has always been reviled,
especially in tech forums.  Top posting is worse in some ways (in that it
screws up the chronological order of quotes), but at least the reader doesn't
have to scroll past hundreds of lines of repeated text.

Edited quotes with interleaved replies is the only good way to do quoting.


> That this has dragged on so long is a frustration for me. I made a mistake
> when I first loaded Ubuntu in that I did not have the other two drives
> available, then, by installing a user system instead of a server system I
> precluded setting up the two drives in RAID or btrfs.

I'm sure that your current situation is fixable, but it requires a fair bit
of knowledge and experience about drives and partitions and filesystems. and
the boot process.  It also requires a detailed log of the boot process (which,
as i mentioned i my last message, is hidden by the useless ubuntu boot logo.
because branding is more important than technical info).

> And this is what has led to believe that my easy way out of this is to do a
> clean install with all my drives connected and choose "server" and hopefully
> the bouncing ball will get me to a cheerful conclusion.

That may be the easiest solution.  remember to backup your data first :)

Also, as I said in my last message:

1. upgrade your RAM.  16GB minimum if you're running gimp and darktable and a
browser and who knows what else.

2. consider getting an SSD (or a pair of them in RAID-1) for the boot/OS
drive.

The cheapest SSDs start at around $30 for 128GB these days and will be MUCH
faster than any mechanical drive. 128GB is enough for the kernel and the root
fs, use the new 3TB drives as /home.  A 256GB Crucial MX500 is about $75, with
performance of about 560 MB/s read and 510 MB/s write (approx 4 or 5 times
faster than any mechanical drive).

Having two drives in RAID-1 not only adds redundancy to the storage, it will
generally double the read speed (but not the write).  IMO if money is tight,
having two 128GB drives in RAID-1 is better than one 256GB drive...but note
that most 128GB SSDs are older, last-gen technology.

If your motherboard has NVME slots, then it's worth paying the extra $45 for
something like the 250GB Samsung 970 EVO PLUS (~ $120) - around 3500 MB/s
read and 3300 MB/s write.  About six times faster than a SATA3 SSD.  I still
recommend buying a pair so you can have RAID-1, which doubles the price to
$240 (but note that the 500GB model is $169, or $338 for a pair, so is much
better value for money).


BTW, if your m/b doesn't have nvme slots you can get PCI-e cards that have
1, 2, or 4 nvme slots on them...but it's worth doing research before buying
because some can boot off the nvme and some can't. here's something not too
old as a starting point:

https://forums.anandtech.com/threads/what-pcie-add-in-cards-can-boot-a-nvme-m-2-ssd.2498673/


> So I will read your response in the morning when I am fresh, and I am
> grateful for your continued assistance. I thought that by using a raid
> system or the btrfs then I may have had some security for my data, but maybe
> I should just use the now substantial amount of storage I have and just buy
> more cloud space when I need it.

RAID (in any form, including mdadm, lvm, btrfs or zfs) is good.  It greatly
reduces the number of times you NEED to restore from backup (and filesystem
snapshots as provided by btrfs and zfs do too)...but remember that backups
will still always be necessary.  RAID IS NOT A SUBSTITUTE FOR BACKUP.

(also, RAID-0 is not really RAID and provides NO redundancy.  It actually
increases your risk of catastrophic data loss).

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-20 Thread Craig Sanders via luv-main
On Wed, Feb 20, 2019 at 09:33:21PM +1100, Andrew Greig wrote:
> I have peace of mind about the ECC or not issue. I have a machine which
> boots slowly compared with even ten years ago. One needs to boot it up and
> then log in, and go and make a cup of coffee have a chat with a friend over
> the phone, and then it may be ready to perform,

What are you running on that machine? and how much RAM does it have?  And does
the motherboard have any RAM slots free? if so, upgrading RAM is still the
single best and cheapest way to improve performance on most machines. IIRC
your machine is fairly old, so it takes DDR-3 RAM rather than the newer DDR-4.
A 4GB DDR-3 stick is about $36 these days.  An 8GB DDR-3 stick is about $65.
You should upgrade RAM in pairs, so either 2x4GB or 2x8GB.

Are you running systemd? if so, have you tried running 'systemd-analyze blame'
to see where the boot delays are occurring?
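e.g. (both commands are part of systemd itself):

systemd-analyze blame | head -20       # slowest units first
systemd-analyze critical-chain         # the chain of units that actually delayed boot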


Are you running something that scans the entire drive on every boot? something
like the ancient gnome beagle or kde's nepomuk or baloo?


> but it still takes a minute to load the first web page over a high speed
> cable connection.

Are you running firefox or chromium? both of them are RAM hogs, but chromium
is much worse - it uses several times more RAM than firefox to display the
same number of tabs or fewer.

does your internet connection gets started automatically on boot, or only on
demand when something (like your web browser) tries to connect to somewhere on
the internet?

are you running a local DNS cache? or a web proxy cache?  A local DNS caching
resolver is **definitely** worth having.  A web proxy may be worth having if
you visit the same sites repeatedly or if there are any other computers on the
network visiting the same sites.


> 20 seconds to open the file manager, about the same to open
> Thunderbird. Boot times are becoming a bit like Microsoft 's BSOD used to
> be, an unfortunate fact of life.

that sounds like a combination of insufficient RAM, and slow swapping.


> I am still unsure how to use gparted to get the disks recognised by the
> system. I can hot plug them and the system will not crash, but if I try to
> boot with them connected it will fail to boot.

Try it again without the "quiet splash" options in the grub boot entry. Ubuntu
adds these annoying options to hide the nasty horrible text that shows what is
happening when the machine boots and replace it with a pretty but completely
useless and uninformative graphical logo. Yay. Most of the time, you don't
need to see the kernel boot up messages...but when you DO need them, there is
no substitute for them.  IMO, it's criminal negligence to hide them away as if
they're some dirty little secret rather than vital diagnostic information.

Without this information, it's very hard to figure out what the problem is.



Anyway, ignoring my rant, instead of hitting enter or waiting for grub to time
out, hit "e" to edit the grub entry. look for the line with "quiet splash" on
it and remove those two options.

Hit F10 or Ctrl-X to boot.  This change is not permanent, it only affects the
current boot.

Alternatively, choosing the "recovery mode" option from the grub menu may give
you the same result.  It should also give you a password prompt to get a root
shell which you can use to investigate and fix the problem (you will need to
run "mount -o remount,rw /" to be able to edit the root fs).



Another alternative:

0. make a backup copy of your grub default file.  e.g.:

sudo cp -a /etc/default/grub /etc/default/grub.20190220

1. sudo vi /etc/default/grub (or use nano or whatever your favourite editor is)

2. remove "quiet" and "splash" from wherever they occur (either or both of
   GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX)

3. Uncomment the "#GRUB_TERMINAL=console" line by removing the # at the start.

4. save and exit

5. sudo update-grub

This will get rid of the "quiet splash" options permanently.
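After steps 2 and 3, the relevant lines in /etc/default/grub should look
something like this (your file may have other options in those variables - only
remove "quiet" and "splash", keep anything else that's there):

GRUB_CMDLINE_LINUX_DEFAULT=""
GRUB_CMDLINE_LINUX=""
GRUB_TERMINAL=console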


> My /etc/fstab file has little information to copy, which is why I feel that
> a new build may be the best way forward for me.

If you do that, then IMO you should seriously consider the following:

1. Upgrade the RAM in your machine to the maximum it will take.  16GB or more.
In fact, you should do this anyway even if you don't rebuild the system.

2. Buying a smallish (128 to 256 GB) SSD for the boot drive and swap space.
Optionally buy a second identical one so you can have RAID-1.  Use the two 3TB
drives in RAID-1 (with mdadm or btrfs) for bulk storage (the old 1TB drive is
ancient and should be probably be retired. or reformatted and used only for
temporary scratch space after you've copied your old files from it)





If you're going to upgrade your RAM, it may also be worth upgrading the
motherboard and CPU to something that can take DDR-4 RAM (a 16GB kit of 2x8GB
DDR-4 starts from around $160, and because DDR-4 is readily available in much
larger sizes than DDR-3, it's easily upgraded all the way to 64GB or more).

A new CPU 

Re: making backports I have built recognised as providing correct package

2019-02-20 Thread Craig Sanders via luv-main
On Wed, Feb 20, 2019 at 08:11:16PM +1100, Mark Trickett wrote:
> Aha, another piece of using apt-get. It is brilliant, but also a very steep
> learning curve. It would be very good to have a good cheat sheet in a
> printable form.

$ apt-get --help
apt 1.8.0~rc3 (amd64)
Usage: apt-get [options] command
   apt-get [options] install|remove pkg1 [pkg2 ...]
   apt-get [options] source pkg1 [pkg2 ...]

apt-get is a command line interface for retrieval of packages
and information about them from authenticated sources and
for installation, upgrade and removal of packages together
with their dependencies.

Most used commands:
  update - Retrieve new lists of packages
  upgrade - Perform an upgrade
  install - Install new packages (pkg is libc6 not libc6.deb)
  reinstall - Reinstall packages (pkg is libc6 not libc6.deb)
  remove - Remove packages
  purge - Remove packages and config files
  autoremove - Remove automatically all unused packages
  dist-upgrade - Distribution upgrade, see apt-get(8)
  dselect-upgrade - Follow dselect selections
  build-dep - Configure build-dependencies for source packages
  clean - Erase downloaded archive files
  autoclean - Erase old downloaded archive files
  check - Verify that there are no broken dependencies
  source - Download source archives
  download - Download the binary package into the current directory
  changelog - Download and display the changelog for the given package

See apt-get(8) for more information about the available commands.
Configuration options and syntax is detailed in apt.conf(5).
Information about how to configure sources can be found in sources.list(5).
Package and version choices can be expressed via apt_preferences(5).
Security details are available in apt-secure(8).
This APT has Super Cow Powers.


> I am now considering two installs on the computer, one stable, the other
> sid, and dual booting to get the scanning. As there is one normal user, I
> should be able to set up a shared home partition and the one user in each
> install sharing the one home directory structure.  That way, I still have a
> usable system for most things, but can get at the stuff in sid at need.

That seems overly complicated but it should work. the only thing to be wary of
is to make sure that your user has the same UID and GID on both systems (which
should be the default, as debian makes users with UIDs starting from 1000)

You probably don't even need a separate /home partition.  You could just mount
the stable system (e.g. as /stable) and symlink /home/mark on the sid system
to /stable/home/mark.



Personally, I'd just upgrade to sid.  I've never considered the stable release
to be anything special, its main use to me is providing an installer to build
new systems with (that then get immediately upgraded to sid).

I'm biased, though: I've been using debian unstable since the 90s. BTW, the
only reason why "unstable" is called "unstable" is because a CD distributor
in 1994 or 1995 jumped the gun and released a "Debian 1.0" CD before it was
ready.  The name was deliberately chosen to be scary enough to discourage
anyone from doing the same thing again...it doesn't mean that it's flaky or
crash-prone.

IMO, obsessing over the "stable" release of debian misses one of the best and
most important features of debian - it's a constantly updating distribution
with new and updated stuff every day.  This occasionally (rarely) causes
problems but a) as long as you're careful and don't let apt uninstall stuff
you don't want it to, it's no big deal and b) those problems are almost always
easily fixed...back in the 90s there were occasionally some huge problems,
but not since the transition from libc5 to libc6 in 1998 (which required a
very precise and complicated upgrade procedure.  I wrote a script called
autoup.sh to completely automate the procedure, taking into account the
vagaries and bugs reported by many people at the time.  This was before apt
existed, which automates the kind of dependency resolution I had to do in my
script: my script was a one-off hard-coded hack, while apt analyses the deps
etc and arrives at a solution)



The biggest problem with sid these days is the constant churn of KDE and Qt
packages (and, to a lesser extent, gnome).  I've found that the best way to
avoid problems there is to use 'apt-mark' to hold the few KDE & Qt apps I use
(okular, qpdfview, calibre) so that they don't get auto-uninstalled due to
versioned dependencies (doing that causes the KDE and/or Qt packages to not
get upgraded if that would cause the held packages to be removed).
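e.g. (a sketch - substitute whichever packages you actually care about):

sudo apt-mark hold okular qpdfview calibre
apt-mark showhold                  # list what's currently held
sudo apt-mark unhold okular        # release a hold when you do want it upgraded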

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-20 Thread Craig Sanders via luv-main
Firstly, can you please configure your thunderbird mail client to NOT send
HTML mail?  Or at least send both HTML and plain text?  HTML mail really
screws up the quoting, making it very hard to tell what's quoted and what's
new.

Also, don't bottom-post.  Bottom posting is evil.  And please trim your quotes
to the bare minimum required to provide context for your response - no-one
wants to read the same quoted messages over and over again just because you
couldn't be bothered editing your messages properly.  It tells the reader "I
don't care about wasting YOUR time, as long as I save myself a few precious
seconds".

On Sun, Feb 17, 2019 at 02:08:13AM +1100, Andrew Greig via luv-main wrote:
> This my /etc/fstab
> 
> andrew@andrew-desktop:~$ sudo cat /etc/fstab

You don't need sudo to read /etc/fstab, only to edit it. it's RW by root, RO
by everyone else.

> # /etc/fstab: static file system information.
> #
> # Use 'blkid' to print the universally unique identifier for a
> # device; this may be used with UUID= as a more robust way to name devices
> # that works even if disks are added and removed. See fstab(5).
> #
> #            
> /dev/mapper/ubuntu--vg-root /ext4    errors=remount-ro0   1
> /dev/mapper/ubuntu--vg-swap_1 none   swap    sw   0   0
>
> andrew@andrew-desktop:~$ blkid
> /dev/sda1: UUID="sI0LJX-JSme-W2Yt-rFiZ-bQcV-lwFN-tSetH5"
> TYPE="LVM2_member" PARTUUID="92e664e1-01"
> /dev/mapper/ubuntu--vg-root: UUID="b0738928-9c7a-4127-9f79-99f61a77f515"
> TYPE="ext4"

If you're running LVM then you don't need to (and shouldn't, see below) use
UUIDs to mount your filesystem.  The device mapper entries provide the same
kind of consistency and uniqueness as a LABEL.

You shouldn't use UUIDs when mounting LVM volumes because any snapshots of
that fs will have the same UUID unless you change the snapshot's UUID with
something like 'tune2fs -U random' (ext4) or 'xfs_admin -U generate' (xfs).
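e.g. for an ext4 snapshot (a sketch only - the snapshot name is made up, and
ubuntu-vg/root is just the volume group/LV naming that appears in this thread):

sudo lvcreate -s -n root_snap -L 5G ubuntu-vg/root   # snapshot of the root LV
sudo tune2fs -U random /dev/ubuntu-vg/root_snap      # give the snapshot its own UUID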


> after hot plugging the two drives (I chose to try this to see if they would
> be picked up and configured in the same way as a USB key is detected. it
> seems that sdb and sdc have been detected
>
> dmesg gives this:
>
> [  279.911371] ata5: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
> [  279.912343] ata5.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
> [  279.912349] ata5.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), 
> AA
> [  279.913799] scsi 4:0:0:0: Direct-Access ATA  ST2000DM006-2DM1 CC26 
> PQ: 0 ANSI: 5
> ...
> [  331.750805] ata4: SATA link up 6.0 Gbps (SStatus 133 SControl 300)
> [  331.751777] ata4.00: ATA-9: ST2000DM006-2DM164, CC26, max UDMA/133
> [  331.751784] ata4.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32), 
> AA
> [  331.753212] scsi 3:0:0:0: Direct-Access ATA  ST2000DM006-2DM1 CC26 
> PQ: 0 ANSI: 5
>
> Since the drives have not been partitioned or formatted, should I just
> download the latest Ubuntu and install as a server, with the two drives
> taking up a RAID config?
>
> Or could I just run gparted and partition and format those disks alone?

I don't see any reason why you'd want to re-install the OS just to add some
drives.

how you partition and format them depends on what you want to do with them.
your two main options are to:

1. Add them as new physical volumes to your existing LVM volume group.  This
would allow you to expand any existing filesystems and/or create new logical
volumes to format and mount (e.g. you could create a new lv, format it with
xfs or ext4, and mount it as /media to store video & music files - see the rough sketch after option 2)

2. Partition, format, and mount as completely separate filesystem(s). e.g. if
you just want somewhere to store video or music files.  This could be done
using any filesystem, with or without RAID (either via mdadm, or by creating a
new LVM volume group, or even with btrfs or zfs)
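Option 1 would look roughly like this (a sketch only - the device name, LV name
and mount point are assumptions, check yours with pvs/vgs/lvs first):

sudo pvcreate /dev/sdb1                      # turn the new partition into an LVM physical volume
sudo vgextend ubuntu-vg /dev/sdb1            # add it to the existing volume group
sudo lvcreate -n media -l 100%FREE ubuntu-vg # new logical volume using the added space
sudo mkfs.ext4 /dev/ubuntu-vg/media
sudo mount /dev/ubuntu-vg/media /mnt         # then add a matching line to /etc/fstab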



I'd guess that the only reason you're using LVM is because that was the
default option when you first installed Ubuntu.  It doesn't seem like you're
familiar enough with it to have chosen it deliberately.  IMO unless you know
LVM well, you're generally better off with btrfs - like ZFS, it's a filesystem
that has the features of software RAID and volume-management built in, and
is much easier to use than dealing with mdadm + lvm2 + filesystem utilities
separately.

BTW, you may be tempted to use some variant of RAID-0 (linear append or
striped) to combine the two 3TB drives into one 6TB filesystem.  Don't do
that unless you're willing to risk that a single drive failure will lose
*everything* stored on that 6TB.  RAID-0 is NOT safe to use for any data of
any importance.  The only reason to use it is if you need a large amount of
fast storage for temporary files...and an SSD will be much faster than that
anyway.

(NOTE: data stored on striped raid-0 is effectively unrecoverable in case of a
single drive failure. With linear append, recovery of most of the data stored
on the non-failed drive is a 

Re: Boot problems after adding 2 new disks

2019-02-20 Thread Craig Sanders via luv-main
On Wed, Feb 20, 2019 at 08:18:59PM +1100, Morrie Wyatt via luv-main wrote:
> The ECC warnings just mean that either your motherboard doesn't support ECC
> error correcting RAM, or that you don't have ECC RAM installed.

AFAIK, you see it when the motherboard supports ECC RAM but you only have
non-ECC RAM installed - the kernel doesn't even try to load the ECC module
unless it detects that the hardware is capable of ECC.

I see this warning all the time on my machines (all with AMD CPUs - currently
a Phenom II 1090T, an FX-8150 and an FX-8320, and a Threadripper 1950x).

As you say, it's not something to worry about unless, of course, you KNOW you
paid extra for ECC RAM and it SHOULD be detected :)


ECC RAM typically costs at least 30% more than non-ECC RAM and it's typically
not available in stock in most whitebox computer shops, it's a special
request you have to go out of your way to ask for or find - so, unless you've
re-purposed an old server machine, it's not likely to be something that
someone has and doesn't know about.

> Typically you will only find ECC support on server motherboards, not
> comsumer level motherboards.

Most AMD motherboards supporting Phenom II, FX or newer CPUs support both ECC
and non-ECC RAM. i.e. since at least 2008 or so.

Intel motherboards and CPUs typically don't support ECC unless you've bought
a "server" motherboard and CPU.  Intel likes to engage in artificial market
segmentation to prevent customers from using cheaper CPUs and motherboards for
what they consider to be high-end server tasks. because near-monopoly allows
them to get away with shit like that.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: making backports I have built recognised as providing correct package

2019-02-17 Thread Craig Sanders via luv-main
On Sun, Feb 17, 2019 at 05:59:54PM +1100, Mark Trickett wrote:
> This is the issue, I will have to remove and rebuild the tiff packages. The
> relevant lines are :-
>
> libtiff5 (=${binary:version}),
> libtiffxx5 (=$binary:version})
>
> Is there a way to alter these to add the "~backport" item? or should I
> remove the "~backport" item from the version of the whole in the changelog
> file?

it's probably easiest to just rebuild these packages as "4.0.10-4" without
the "~backport"

it probably won't even make any difference anyway - by the time you
dist-upgrade to the next debian release, libtiff* will probably have been
upgraded a few times. if not, you could always manually force the official
debian packages to be installed with "apt-get --reinstall install libtiff5
libtiffxx5 ...".

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: making backports I have built recognised as providing correct package

2019-02-16 Thread Craig Sanders via luv-main
On Sun, Feb 17, 2019 at 04:59:46PM +1100, Mark Trickett wrote:
> Even with one of the backports, it produced multiple debs, and one of those
> depends on two others, and adding the "~backport" tag in the

it's not a "tag", it's part of the version string for the package you built.

For most Depends: entries, the version isn't important - as long as the
package is installed, the dependency will be satisfied.  But some dependencies
are versioned and require an exactly = or >= match to a specific version of a
package.

> I built tiff-4.0.10, and libtiff-dev depends on libtiff5 and libtiffxx5
> (version = 4.0.10-4 versus 4.0.10-4~backport). Without those all installed,
> dpkg-buildpackage will not build sane-backends-1.0.27.

If you've built and installed libtiffxx5 version "4.0.10-4~backport" and
sane-backends or some other source package depends on "4.0.10-4", then edit
the debian/control file for that package to change the dependency to version
"4.0.10-4~backport".

versioned dependencies have to either be an exact match for the version string
(when the dep specifies "=") or greater-than-or-equal-to a specific version
string (when the dep specifies ">=").

with some packages, you might also see versioned Conflicts lines that have "="
or "<=".

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Ongoing scanner setup, now need a bit of help with dpkg-buildpackage

2019-02-16 Thread Craig Sanders via luv-main
On Sun, Feb 17, 2019 at 10:55:52AM +1100, Craig Sanders wrote:
> 2. run 'dch -i' to change the package version number. e.g. when dch starts
> up your text editor, change the first line to something like:
>
> sane-backends (1.0.27-3.1~backport) UNRELEASED; urgency=medium

The version number (inside the parentheses) is the only thing that needs to be
changed on this line.

Note: i just tried this on my sid machine and dch aborted with some error
about not finding the original source archive, and didn't change the version
number for me. Don't know why, don't really care - it's a minor problem that
can be ignored(*), and we're not rebuilding the source packages anyway.

I just edited the changelog manually with 'vi debian/changelog' and changed
the version number, then ran dpkg-buildpackage.

(*) OTOH, if it *had* prevented dpkg-buildpackage from successfully compiling
the packages, it would have been a problem worth investigating and fixing. but
it didn't and it wasn't, so i ignored it :)

> 5. install the packages you just built with:
>
> dpkg -iBE *1.0.27-3.1~backport*.deb

This last step has to be done as root (e.g. with sudo, or in a root shell).
everything else can be done as your normal, non-root user.

BTW, the file glob pattern matches only the packages you just built, just in
case there are other .deb packages in the same dir.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Ongoing scanner setup, now need a bit of help with dpkg-buildpackage

2019-02-16 Thread Craig Sanders via luv-main
On Sun, Feb 17, 2019 at 10:16:37AM +1100, Mark Trickett wrote:
> I have managed to locate and download the source files,
> sane-backends_1.0.27.orig.tar.gz, sane-backends_1.0.27.debian.tar.gz and
> sane-backends_1.0.27.dsc and put them in a directory. I tried

extract the debianised source with:

dpkg-source -x sane-backends_1.0.27.dsc

> Now I am looking at building my own backport, and looking at using
> dpkg-buildpackage. I would appreciate advice and comments about usage,
> what to do to prepare and the like. I will be looking at web searches,

0. Make sure all the essential compilation tools are installed with:

apt-get install build-essential

1. cd sane-backends-1.0.27/

2. run 'dch -i' to change the package version number.  e.g. when dch starts up
your text editor, change the first line to something like:

sane-backends (1.0.27-3.1~backport) UNRELEASED; urgency=medium

3. run "dpkg-buildpackage -b --no-sign".  This tells dpkg-buildpackage to
build only the binary packages (-b), and not to bother signing anything (not
necessary for self-built packages, and avoids the need to create a gpg key if
you don't have one)

The first time you run this, it will probably give you an error like this:

dpkg-checkbuilddeps: error: Unmet build dependencies: chrpath libgphoto2-dev 
libsnmp-dev libsystemd-dev libusb-1.0-0-dev

install those build dep packages with apt-get, e.g.:

apt-get install chrpath libgphoto2-dev libsnmp-dev libsystemd-dev 
libusb-1.0-0-dev

and then repeat this step until the packages compile successfully.


4.  cd ..

5. install the packages you just built with:

dpkg -iBE *1.0.27-3.1~backport*.deb


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-15 Thread Craig Sanders via luv-main
On Sat, Feb 16, 2019 at 03:00:08PM +1100, Morrie Wyatt wrote:
> The one extra step you might need to add to the end of Craig's list would be
> to force a rebuild of your bootloader configuration (probably Grub) so that
> the fstab UUID / LABEL changes get propagated into grub's config files.

It certainly can't hurt to do that but it shouldn't be necessary.  Grub uses
UUIDs by default unless you tell it not to. There's a commented out option in
/etc/default/grub on debian/ubuntu systems to disable use grub's of UUID:

# Uncomment if you don't want GRUB to pass "root=UUID=xxx" parameter to 
Linux
#GRUB_DISABLE_LINUX_UUID=true

There's almost no reason why anyone would need to uncomment that.



It might, however, be worthwhile running 'grub-install' on ALL of the drives
*after* the system has successfully booted with the new drives installed.
That way the grub first stage boot loader will be available no matter which
drive the BIOS tries to boot from (this is assuming that it's an old-style
BIOS boot, rather than UEFI. UEFI is different, it loads grub directly from a
smallish FAT-32 EFI partition).
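e.g. for a two-drive system (assuming the drives are /dev/sda and /dev/sdb -
check with 'fdisk -l' first):

sudo grub-install /dev/sda
sudo grub-install /dev/sdb
sudo update-grub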

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Scanning, challenging

2019-02-15 Thread Craig Sanders via luv-main
On Sat, Feb 16, 2019 at 03:01:35PM +1100, Mark Trickett wrote:
> So I have two packages to update, plus any dependencies of libc6 of the
> later version. Now to learn how to find out what that might be.  And then to
> see where that leads. Then to install the sane-utils (v 1.0.27) package. The
> capacity to manage and do this is why I am using Debian, but still not
> sufficiently familiar with all of it. I learn by doing, but have not done
> enough of it to know and remember, yet.

I have no idea what version of libc6 you have installed (the last stable
release was ages ago, and i run sid aka unstable anyway) but if you have to
upgrade libc6 to install the new sane packages, then that will trigger an
enormous cascade of other upgrades, likely to result in a lot of breakage.

To put it mildly, this is more trouble than it's worth.

It would be easier to just upgrade to 'sid' or 'testing'.  Or compile
backports for the SANE packages yourself.

To install packages from sid or testing with apt, edit your sources.list
file (either /etc/apt/sources.list or a file under /etc/apt/sources.list.d)
and copy the line that lists "stable".  change the copy to "testing" or 
"unstable".


BTW, you can do a partial upgrade to either sid or testing by:

1.  adding entries for either (or both) of them to your sources.list. e.g.

deb http://your.nearest.debian.mirror/debian/ stable main non-free contrib
deb http://your.nearest.debian.mirror/debian/ testing main non-free contrib
deb http://your.nearest.debian.mirror/debian/ unstable main non-free contrib

2. and then adding the following to /etc/apt/apt.conf (or to a file under
/etc/apt/apt.conf.d):

APT::Default-Release "stable";

That makes packages from sid or testing available, but they will NOT be
installed unless you explicitly tell apt to install them with the '-t' (aka
'--target-release') option. e.g.

apt-get update
apt-get -V -d -u -t unstable install sane-utils libsane

That will verbosely show which packages *would* be upgraded and download them.

Verify that the command won't do anything unexpected (like removing other
packages - such as your desktop environment - or upgrading 1000 other
packages) and then run the same command without the "-V" or "-d" options to
actually install them.

As a rule of thumb: if an 'apt-get -t unstable install' wants to remove any
packages or upgrade/install lots of other packages, then abort the upgrade.


BTW. from the apt-get man page:


-t, --target-release, --default-release

This option controls the default input to the policy engine;
it creates a default pin at priority 990 using the specified
release string. This overrides the general settings in
/etc/apt/preferences. Specifically pinned packages are not affected
by the value of this option. In short, this option lets you have
simple control over which distribution packages will be retrieved
from. Some common examples might be -t '2.1*', -t unstable or
-t sid.  Configuration Item: APT::Default-Release; see also the
apt_preferences(5) manual page.


NOTE: it is generally better to upgrade entirely to testing or sid (or
backport a required package to stable) than it is to run a combination of
stable + some packages from testing/unstable.  You're far less likely to run
into library quirks and version incompatibilities that way.


> > Alternatively, download the debianised source for sane-utils and rebuild
> > for debian 9. i.e. make your own backport.
>
> That has potential, but would need the backport of the libsane as well. I am
> not yet sure of the development and build environment, nor whether I have
> the necessary installed yet.

Both the sane-utils and libsane packages are built from the same source
package, "sane-backends".

BTW, if you use dpkg-buildpackage to build the packages, it will warn you if
any build-dependencies are not met.


If any of this is outside your comfort zone, then it would be better to just
wait until either the new sane packages are in debian backports or until you
upgrade to the next debian stable release. or ask someone you trust who runs
debian stable to backport them for you.



If you want to extend your "comfort zone" then install debian stable into
a VM and try either upgrading to testing or sid in there, or back-porting
sane-backends inside the VM.  That way you can get some practice without
putting your system at risk.  VMs are great for experimenting with stuff or
learning new stuff that is potentially dangerous.  Make one stable VM and then
clone it for every new experiment - if it breaks, it's no big deal: just trash
it and make another clone to try again.

The easiest way to play with VMs is by installing the libvirt packages,
allowing you to create and manipulate VMs and VM images with 'virsh' commands.
The virt-manager package provides a GUI wrapper around virsh, which is good
enough for most usage - there's a lot that virsh can do 

Re: Boot problems after adding 2 new disks

2019-02-15 Thread Craig Sanders via luv-main
On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
> I  shut it down and removed the 2 SATA cables from the MB and booted up -
> successfully.

I didn't notice this before.  You can edit /etc/fstab to change to UUIDs or
LABELs at this point.  Then shutdown, add the new drives, and turn it back on.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Boot problems after adding 2 new disks

2019-02-15 Thread Craig Sanders via luv-main
On Sat, Feb 16, 2019 at 01:02:44PM +1100, Andrew Greig wrote:
> I have had some disks "ready to go" for a couple of months, meaning all that
> was required was to plug the SATA cables into the MB. I plugged them in
> today and booted the machine, except that it did not boot up. Ubuntu 18.04,
> it stopped at the Ubuntu burgundy screen and then went black and nowhere
> from that state.
>
> I  shut it down and removed the 2 SATA cables from the MB and booted up -
> successfully.
>
> It is apparent that I lack understanding, hoping for enlightenment

Is your /etc/fstab configured to mount the root fs (and any other filesystems)
by device node (e.g. /dev/sda1), or by the UUID or LABEL?

If you're using device node names, then you've run into the well-known fact
that linux does not guarantee that device names will remain the same across
reboots.  This is why you should always either use the filesystems' UUIDs or
create labels on the filesystems and use those.


The device node may change because the hardware has changed - e.g. you've
added or removed drive(s) from the systems (this is likely to be the case for
your system).  They may also change because the load order of driver modules
has changed, or because of timing issues in exactly when a particular drive
is detected by linux.  They may also change after a kernel upgrade.  Or they
may change for no reason at all.  They are explicitly not guaranteed to be
consistent across reboots.

For over a decade now, the advice from linux kernel devs and pretty much
everyone else has been:

DEVICE NODES CAN AND WILL CHANGE WITHOUT WARNING.  NEVER USE THE DEVICE NODE
IN /etc/fstab.  ALWAYS USE UUID OR LABEL.

BTW, if you want to read up on what a UUID is, start here:

https://en.wikipedia.org/wiki/Universally_unique_identifier


Note: it's not uncommon for device node names to remain the same for months
or years, even with drives being added to or removed from the system.  That's
nice, but it doesn't matter - think of it as a happy coincidence, certainly
not as something that can be relied upon.



To fix, you'll need to boot a "Live" CD or USB stick (the gparted and
clonezilla ISOs make good rescue systems), mount your system's root fs
somewhere (e.g. as "/target"), and edit "/target/etc/fstab" so that it refers
to all filesystems and swap partitions by UUID or LABEL.

If you don't have a live CD (and can't get one because you can't boot your
system), you should be able to do the same from the initrd bash shell, or by
adding "init=/bin/bash" to the kernel command line from the grub menu.  You'd
need to run "mount -o rw,remount /" to remount the root fs as RW before you
can edit /etc/fstab.  Any method which gets you your system's root fs mounted
RW will work.


To find the UUID or LABEL for a filesystem, run "blkid".  It will produce
output like this:


# blkid
/dev/sde1: LABEL="i_boot" UUID="69b22c56-2f10-45e8-ad0e-46a7c7dd1b43" TYPE="ext4" PARTUUID="1dbd3d85-01"
/dev/sde2: LABEL="i_swap" UUID="a765866d-3444-48a1-a598-b8875d508c7d" TYPE="swap" PARTUUID="1dbd3d85-02"
/dev/sde3: LABEL="i_root" UUID="198c2087-85bb-439c-9d97-012a87b95f0c" TYPE="ext4" PARTUUID="1dbd3d85-03"

If blkid isn't available, try 'lsblk -f'.  Both blkid and lsblk will be on a
system rescue disk, but may not be available from an initrd shell.  If udev
has already run, you can find symlinks linking the UUID to the device name in
/dev/disk/by-uuid.

NOTE: UUIDs will *always* exist for a filesystem, they are created
automatically when the fs is created.  Labels will only exist if you've
created them (the exact method varies according to the filesystem - e.g. for
ext4, by using the "-L" option when you create a fs with mkfs.ext4, or by
using "tune2fs" any time after the fs has been created).



Using the above as an example, if your fstab wanted to mount /dev/sde3 as /,
change /dev/sde3 to UUID=198c2087-85bb-439c-9d97-012a87b95f0c - e.g.

  UUID=198c2087-85bb-439c-9d97-012a87b95f0c  /  ext4  defaults,relatime,nodiratime  0  1

alternatively, if you've created labels for the filesystems, you could use 
something like:

  LABEL=i_root  /  ext4  defaults,relatime,nodiratime  0  1


Do this for **ALL** filesystems and swap devices listed in /etc/fstab.


Save the edited fstab, run "sync", and then unmount the filesystem.  You
should then be able to boot into your system.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Cron jobs + PHP ~ IMAP polling

2019-02-14 Thread Craig Sanders via luv-main
On Thu, Feb 14, 2019 at 09:02:54AM +1000, Piers wrote:
> Is that supposed to be *Deletefile* ?

Does it matter?

It's clearly neither PHP code nor sh code.  It's very obviously pseudocode
intended to show the steps that need to performed to implement a simple
semaphore/lock-file.

craig

--
craig sanders 

BOFH excuse #63:

not properly grounded, please bury computer
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Cron jobs + PHP ~ IMAP polling

2019-02-14 Thread Craig Sanders via luv-main
On Thu, Feb 14, 2019 at 07:17:49PM +1100, luv-main@luv.asn.au wrote:
> 1) Pick the mail out of the inbox in your script(s), or
> 2) deliver each mail directly to a converting filter if your pipeline
>prefers that, and can buffer adequately.

Yes. Break larger tasks into multiple smaller tasks, each of which is either
easily solvable or already has tried-and-tested solutions.

> To attempt to mix up post-arrival grist milling with mail delivery makes no
> sense, I suggest. An inbox makes a fine buffer between the two temporally
> disconnected activities, each with its own fluctuating data rate, but any
> buffer will do.

In other words, the inbox is a queue for other script(s) to process in order,
without having to complicate those scripts by adding a poor imitation of an
already-existing, full-featured, and debugged imap mail fetcher.

Alternatively if using an MDA like procmail or deliver (or some other
programmable MDA), incoming mail can be automatically piped into the
appropriate mail-processing script when it arrives, with pattern rules
allowing mail to be delivered to one (or more) of several different scripts
depending on which pattern(s) any given message matches.

Matchable patterns include headers such as To:, From:, Subject:, X-Mailer:,
and body text (these days, it's worthwhile getting procmail/deliver/whatever
to mime-decode the text/plain or text/html attachments BEFORE trying to match
patterns in the body)

Using existing MDA like this allows your script to just do the one thing it
has to do (i.e. process the mail) without also having to fetch it and/or
decide whether that message is one that should be processed by the script.
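
For example, a hypothetical procmail recipe along these lines (the header
pattern and script path are made up for illustration) pipes matching messages
straight into a processing script:

  :0
  * ^Subject:.*invoice
  | /usr/local/bin/process-invoice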



In short: don't re-invent the wheel, especially when there are already several
good wheels to choose from, each with decades of bug-fixes and refinement.
Instead, use multiple small tools, each of which is exceptionally good at the
one thing it does.

craig

--
craig sanders 

BOFH excuse #249:

Unfortunately we have run out of bits/bytes/whatever. Don't worry, the next 
supply will be coming next week.
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Scanning, challenging

2019-02-09 Thread Craig Sanders via luv-main
On Sun, Feb 10, 2019 at 03:50:30PM +1100, Mark Trickett wrote:
> As i have mentioned, I have an Agfa SnapScan, and it appears to have
> died. I have bought a new scanner, and checked compatibility with the sane
> website. The Canon CanoScan LiDE 120 is listed as supported, but with
> scanimage 1.0.27, while the up to date scanimage on Debian 9.7 is 1.0.25.

the sane-utils (v 1.0.27) package in debian sid has the following dependencies:

Depends: adduser, lsb-base (>= 3.0-6), update-inetd, debconf (>= 0.5) | debconf-2.0,
 libavahi-client3 (>= 0.6.16), libavahi-common3 (>= 0.6.16),
 libc6 (>= 2.15), libieee1284-3, libjpeg62-turbo (>= 1.3.1),
 libpng16-16 (>= 1.6.2-1), libsane (>= 1.0.27), libsystemd0,
 libusb-1.0-0 (>= 2:1.0.8)

libsane 1.0.27 doesn't have any dependencies.

As long as the other installed library packages are at least equal to the
versions mentioned above, you should be able to just download the .deb files
for sane-utils and libsane and install them with dpkg.  If not, download and
update them (and any of THEIR dependencies) before installing sane-utils.

i.e. download the following from your nearest debian mirror:

.../debian/pool/main/s/sane-backends/libsane_1.0.27-3.1_amd64.deb
.../debian/pool/main/s/sane-backends/sane-utils_1.0.27-3.1_amd64.deb

and install them (as root) with:

dpkg -iBE libsane_1.0.27-3.1_amd64.deb sane-utils_1.0.27-3.1_amd64.deb


(or _i386.deb if you're running on an ancient CPU)




Alternatively, download the debianised source for sane-utils and rebuild for
debian 9.  i.e. make your own backport.
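
Roughly, something like this would do it (assuming you have a deb-src entry
for sid/unstable in your sources.list and the usual build tools installed):

  apt-get build-dep sane-backends
  apt-get source sane-backends/unstable
  cd sane-backends-1.0.27*
  debuild -us -uc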


craig

PS: I note that systemd has spread its monstrous tentacles even into userland
apps like sane-utils.  This is why systemd is a menace - it's already nearly
impossible to avoid and will soon be **actually** impossible for distros that
went down the systemd path of insanity to ever disentangle themselves from
RedHat's linux takeover bid.

FFS!  If systemd confined itself to just doing init and didn't assimilate or
infect everything else within reach, I wouldn't have a problem with it.  But
this shit really makes me hate it.

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: openwrt recommendations

2019-02-01 Thread Craig Sanders via luv-main
On Thu, Jan 31, 2019 at 08:20:09PM +1100, Brett Pemberton wrote:
> It's not what you're asking for, but it'll do the job much better IMHO.
>
> https://www.ui.com/edgemax/edgerouter-x/
>
> I've been using one of these for a few years now and it has been perfect.
> Don't miss OpenWRT/Tomato at all

I was going to say that for the price this sort of thing usually goes for,
you'd be better off building a micro-atx PC with 2 or more ethernet ports -
but scorptec has them for $85, so they're surprisingly good value.

https://www.scorptec.com.au/product/Networking-Wired/Routers/66592-ER-X-AU

I was expecting them to be $300 or more - you can build a pretty decent PC
router running linux for $300, but not for $85 unless you don't care about
size or power consumption and can scavenge a free PC to re-use...and a 2-port
Intel NIC is going to cost at least $100 or $150 (cheaper if you don't mind
using Realtek stuff...some of which is OK, some is garbage.  I've been using
a TPLink branded Realtek 8169 NIC for my ADSL pppoe link, but I want an Intel
gigabit NIC when I upgrade to NBN 100).

"I'd be better off building a PC router" is where I always end up whenever I
look into openwrt stuff...but this looks pretty good.  Good enough to be worth
reconsidering, anyway.

I'll put these on my list of things to consider when I upgrade to NBN myself
soon (FTTC is available in my area), but I expect that the things I want to do
on my router/firewall are beyond what any little box is capable of (fail2ban
is essential for my network but is also a bloated pig, for example...OTOH, I
guess I could get f2b to ssh into the router and run iptables there to do the
banning and unbanning)

For someone who isn't running their own services (web, mail, gitlab, asterisk,
dns, dhcpd, hostapd, and more) on their own /24 network at home, they look
pretty good.




BTW, these edgerouters can also run openwrt:

https://oldwiki.archive.openwrt.org/toh/ubiquiti/ubiquiti_edgerouter_x_er-x_ka

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: GUI problem

2019-01-30 Thread Craig Sanders via luv-main
On Wed, Jan 30, 2019 at 06:58:28PM +, stripes theotoky wrote:

> Where do I start debugging this one? Beats me right now.

/var/log/Xorg.0.log

if there are errors while X is starting up, they'll be in here.  Look especially
for lines with "(EE)" (errors) and maybe also "(WW)" (warnings).


~/.xsession-errors

stderr from window managers, desktop environments, GUI apps, etc is logged here.

there will probably be a lot of "noise" drowning out the signal, making it
difficult to interpret what is going on, or to identify what the root cause is.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: postfix boot dependencies with systemd

2019-01-26 Thread Craig Sanders via luv-main
On Sat, Jan 26, 2019 at 10:13:10PM +1100, Russell Coker wrote:
> It appears that the boot of my laptop is delayed by postfix depending on
> network-online.target.  How can I change this?  Postfix is only listening on
> 127.0.0.1 so there's no reason for it to wait until my laptop connects to
> the Wifi network before continuing the boot.
>
> I ran "systemctl edit postfix@-" and "systemctl edit postfix@" and put the
> below in which should make it not depend on network-online.target.  But it
> doesn't change anything.  Any ideas?

is there a loopback only target you can make it depend on rather than 
network-manager?

other than that, my only idea is to dump network manager and manually configure
your network with /etc/network/interfaces.  I've always found that's best,
anyway - NM is OK-ish for the simplest of network configs but a complete PITA
for anything even slightly complex.
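
e.g. a minimal static config sketch (the interface name and addresses are just
examples):

  auto lo
  iface lo inet loopback

  auto eth0
  iface eth0 inet static
      address 192.168.1.10
      netmask 255.255.255.0
      gateway 192.168.1.1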

craig

--
craig sanders 

BOFH excuse #44:

bank holiday - system operating credits  not recharged
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Grub in text mode

2019-01-11 Thread Craig Sanders via luv-main
On Fri, Jan 11, 2019 at 10:48:54PM -0500, Stewart Smith wrote:
> The one thing I miss from screen is built in zmodem capture. Okay, so this
> is a bit esoteric these days, and I guess I should not complain about it and
> instead send a patch to tmux :)

I'd forgotten that screen could even do that.  It's been years since I even
used zmodem, possibly decades.

Seems you're not the first to miss it:

https://github.com/tmux/tmux/issues/906
https://github.com/tmux/tmux/issues/1439

The responses so far have been "No way, this is not what tmux is for" and
"tmux has nothing to do with uploading files". The response may be more
positive if accompanied by a working patch :)


If you're going to send a patch, I'd suggest making two generic features
rather than hard-coding support for zmodem.

1. Add an ability to allow tmux to run a program and pipe the output into
the terminal. e.g. run "rz" inside the terminal and then have tmux run "sz
filename" to upload a file. This would have many other uses, like running
"vi" in the terminal and typing "i" or "o" in vi, then having tmux run "cat
filename".

2. Add an ability to pipe all input from the terminal into a command until
that command terminates. e.g. run "sz filename" in the terminal and then
have "tmux" run "rz".

For programs like rz that are able to detect the end of their input, this
isn't terribly complicated, but it would be much more complicated for programs
that just keep on hoovering up stdin until stdin closes - e.g. "cat filename"
inside the terminal while tmux runs "xsel -i". Another tmux command to
manually close the input to the running external program would be clumsy but
workable.


3. An obvious third feature would be programmable triggers to run commands.
e.g. if tmux sees a particular sequence of characters, then run a particular
program. e.g. "**B00" could trigger running "rz".  I expect that
this would probably be quite difficult, and would probably have a noticeable
impact on tmux's performance.



BTW, there's an ssh wrapper called zssh:

Package: zssh
Depends: lrzsz, openssh-client | telnet | telnet-ssl, libc6 (>= 2.27), libreadline7 (>= 6.0)
Description-en: interactive file transfers over ssh
 zssh (Zmodem SSH) is a program for interactively transferring files to a
 remote machine  while using  the secure  shell (ssh).  It is intended to
 be a convenient  alternative to scp, allowing to  transfer files without
 having to open another session and re-authenticate oneself.
 .
 Files are transferred through the zmodem  protocol,  using the rz and sz
 commands.
Homepage: http://zssh.sourceforge.net/




The main benefit I can see (for either zssh, or zmodem support in tmux) is
that due to firewalls, NAT, etc it's often MUCH harder to connect back to the
origin host with scp or ftp or whatever, than it was to connect from the
origin host to the remote host in the first place.  OTOH, that's one of the
uses for ssh port-forwarding.

There's a Q on stackoverflow showing how to do it even from within your
current ssh session (i.e. without having to set up port-forwarding when you
started ssh) with ssh tilde commands.

https://stackoverflow.com/questions/25482099/how-to-scp-back-to-local-when-ive-already-sshed-into-remote-machine
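
Roughly, the trick looks like this (port numbers are just examples, and it
assumes you can authenticate back to your own machine): in the existing ssh
session press Enter, then "~C" to get ssh's command prompt, add a reverse
forward, then copy back through it from the remote end:

  ssh> -R 2222:localhost:22

  # then, on the remote host:
  scp -P 2222 somefile localhost: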

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Many to many VPN

2019-01-10 Thread Craig Sanders via luv-main
On Thu, Jan 10, 2019 at 05:41:50PM +1100, russ...@coker.com.au wrote:
> If you have servers in multiple countries and people using those servers in
> multiple locations what's a good way of setting up a VPN?
>
> Any ideas?

simplest method would be to set up a VPN service with the same user
credentials on all servers and let users choose which one to connect to,
depending on where they are. Document the recommended VPN server for each
region. Maybe also provide a shell script which pings each one to find the
fastest/closest.
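
A rough sketch of such a script (hostnames are placeholders) - ping each server
a few times and list them sorted by average latency:

  #!/bin/sh
  for host in vpn-au.example.com vpn-us.example.com vpn-eu.example.com; do
      avg=$(ping -c 3 -q "$host" | awk -F/ '/^rtt|^round-trip/ {print $5}')
      echo "${avg:-9999} $host"
  done | sort -n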

LDAP's probably overkill unless you're already using it for other stuff.
rsyncing the user credential file(s) from a "master" location to all other
servers would work.

> Support is required for Linux servers and Linux, Windows, and OS/X clients.

maybe a powershell ping script too.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Grub in text mode

2019-01-09 Thread Craig Sanders via luv-main
On Tue, Jan 08, 2019 at 11:51:33PM +1100, Russell Coker wrote:
> Currently I run my kvm VMs under screen

sounds like a PITA doing everything manually...but whatever works for you.

> and just use screen -r to get the console.

I strongly recommend switching to tmux.  I stuck with screen for many years,
been using it since the early 90s, but finally made the switch about a year
ago...finally got sick of screen's bugs, quirks, piss-poor unicode support,
and effective abandonment as an actively-developed project. it took me about
half an hour to configure it so that the transition was non-traumatic. after
a few days, i wouldn't even consider switching back, any more than i'd switch
back from mutt to elm.

As I did with screen, I've mapped tmux's escape key to ^K.  I rarely use that
for anything else but I use ^A all the time -- ^A is move to start of line
in bash/readline, an extremely stupid key for screen to hijack as its default
control prefix. tmux's default of ^B is better, but I've got used to ^K over
the years.  Unlearning that would be too painful.
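
i.e. something like this in ~/.tmux.conf:

  unbind C-b
  set -g prefix C-k
  bind C-k send-prefix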

> Virsh has some benefits, but so far it hasn't seemed worth the pain.

for just "virsh console", no not worth it. for everything else - virsh and
libvirt are definitely worth it.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Grub in text mode

2018-12-21 Thread Craig Sanders via luv-main
On Fri, Dec 21, 2018 at 04:32:20PM +1100, Russell Coker wrote:
> Recently Grub has been changing to a high resolution mode. On some systems
> this is really slow, presumably due to having a crap BIOS. On kvm/qemu
> systems it doesn't work with -display curses.
>
> How do I get grub to stick to 80x25 text?

On Debian, Ubuntu etc, edit /etc/default/grub and

1. un-comment the line:

GRUB_TERMINAL=console


2. optionally comment out the line starting with:

GRUB_GFXMODE=

(i can't remember if this is required or not.  I think it isn't)


3. save & exit, run "update-grub"

On non-debian machines, dunno.  probably fuck around with files in /etc/grub.d/


BTW, for kvm you might also want to enable a serial console.  I usually enable
two serial consoles per VM, one to log the VM's boot up to a text file on
the host, the other for console access with 'virsh console' (this is also
how console access is provided in the javascript web console by openstack and similar
virtualisation wrapper systems).  Remember to actually configure both grub and
a getty (via inittab or systemd) to use the serial console as well as adding
the console "hardware" to the VM.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: LAN-only chat?

2018-11-01 Thread Craig Sanders via luv-main
On Thu, Nov 01, 2018 at 11:09:36PM +1100, Nick Evans wrote:
> Potentially i have miss interpreted your question but we are currently
> looking into Rocket Chat..

looks like even more of an overkill than running ircd or something.

really, all i want is something simple for me and my partner to pass
quick notes to each other occasionally (probably not even a few times per
month)...mostly notes that aren't worth even the bother of spending 30 seconds
sending a quick email.


pidgin's bonjour plugin would be more than adequate if it was reliable -
minimal configuration, no server required. chat peers just find each other on
the LAN. when it works, it's good.

the trouble is that it often loses the ability to see or communicate with
other clients. can usually be resolved by restarting both clients, but having
to yell out "hey restart pidgin, i want to send you something" defeats the
purpose of not wanting to yell out or interrupt my partner when she's busy.

> Essentially host your own Slack.

this isn't a selling point for me. slack seems to only have a purpose if you
pretend that things like irc and xmpp and email don't exist.

just one more "cool" reinvention of the wheel with obligatory tie-ins to
yet another corporate overlord, another great opportunity to turn myself
into a product. and fuck yeah, why not do it with nodejs or some other
security-nightmare dependency on untrustworthy code repositories?


> https://rocket.chat/
>
> The bundles (server/client) are super easy to deploy and you can have
> win/linux/mac/android/IOS clients.

seems like a lot more work than just running ircd or jabberd.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


LAN-only chat?

2018-10-31 Thread Craig Sanders via luv-main
Anyone know of a decent local-LAN chat program?

I've tried the Bonjour (i.e. avahi) module for pidgin but it's just unreliable.

I don't know whether it's pidgin that's the problem or whether it's because
avahi is more half-arsed Poettering garbage, but I've spent hours fucking
around with it on multiple occasions, think i've got it working OK, and then
the next time either I or my partner try to use it to send notes to each
other, it just doesn't fucking work.

I suspect, but am not sure, that either pidgin or avahi gets confused because
all machines on my LAN have multiple addresses on different subnets - this is
too useful for VMs and docker images and other stuff to even consider changing
just for a chat program.

If I had to, I could set up an irc or xmpp server or something but that seems
like overkill for this.

So, can anyone recommend a no-frills, no-fuss LAN chat that just works?  Even
tips on making pidgin + bonjour work reliably would be great.

Preferably something better than the ancient ytalk.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Dropbox and zfs

2018-10-21 Thread Craig Sanders via luv-main
On Sun, Oct 21, 2018 at 11:49:40AM +, stripes theotoky wrote:

> As of November Dropbox are dropping support for Linux file systems other
> than ext4, as I run zfs this is less than helpful.
>
> I am wondering as a temporary measure until I move to Pcloud if it would be
> possible to format a 64GB SD card as ext4, copy the contents of my Dropbox
> folder to it and then mount it at /home//Dropbox.

yes, entirely possible.

> Assuming this works it would leave the system drive as zfs and Dropbox in
> its own ext4 flashcard.
>
> Are there better ways to do this and if so how?

My partner needs to use dropbox to share files with people at work, and
her desktop machine here runs ZFS.  When dropbox announced this ext4-only
thing a few months ago, I just created a 20GB ZVOL (with "zfs create -V
20G poolname/dropbox"), formatted it as ext4, mounted it as ~/dropbox in
/etc/fstab (which is what the dropbox client is configured to use), and
chown-ed the mounted fs to her uid & gid.
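
i.e. roughly this (pool, mountpoint, and user names are examples):

  zfs create -V 20G poolname/dropbox
  mkfs.ext4 /dev/zvol/poolname/dropbox
  mkdir -p /home/username/dropbox
  # /etc/fstab entry:
  #   /dev/zvol/poolname/dropbox  /home/username/dropbox  ext4  defaults,relatime  0  2
  mount /home/username/dropbox
  chown username:username /home/username/dropbox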

It works without any problems.  As far as dropbox (or anything else) is
concerned, the mounted fs is an ext4 filesystem no different to any other ext4
fs - a ZVOL is just another block device, one that happens to be provided by a
zfs pool.

The ZVOL is snapshotted regularly and backed up with 'zfs send' to the
"backup" pool on my mail zfs server.  It also benefits from the usual ZFS
features like error-detection and correction, compression, etc.



If it turns out that 20GB isn't enough, the ZVOL can be grown with 'zfs set
volsize=...' and the ext4 fs on it can then be resized with 'resize2fs'.

I didn't bother putting a partition table on the ZVOL but if I had, the ext4
partition would have to be resized with fdisk or similar after resizing the
ZVOL but before resizing the ext4 fs.
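
e.g. (size and pool name are examples; resize2fs can grow ext4 while it's
mounted):

  zfs set volsize=30G poolname/dropbox
  resize2fs /dev/zvol/poolname/dropbox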


> What will happen with the zfs snapshots that the box takes every minute?

The SD card isn't part of the zpool so it won't be snapshotted.  If you're rsyncing or
copying it to ZFS, that's only a read (remember to use the noatime or relatime
mount option), not a write so won't affect the SD card's lifespan.

> What is the life expectancy of an SD card used like this?

I don't know what kind of write frequency the dropbox client does, and life
expectancy aka write endurance depends on the quality of the SD card.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Apache + Open Office Headless

2018-09-26 Thread Craig Sanders via luv-main
On Wed, Sep 26, 2018 at 10:25:29AM +1000, Piers wrote:
> I have a VM with 16GB RAM and 8 Cores. It's job is to accept a HTTP request
> using PHP, take the data (documents) and run open office to convert it to
> plain text and return it. Up until now this process has been fine. It seems
> that the server hangs as it just gets overloaded.

What kind of document files are they?

If they're OO or LO to begin with, you may be able to extract what you need
from them without running open office - OO files are zip files containing a
bunch of XML files, with the most interesting/useful one being "content.xml"

If you don't need the entire file converted and just want some specific bits
of data from it, you could use xmlstarlet[1] or a similar XML parsing tool
to extract data from the unzipped xml file(s).  Don't try to use regular
expressions for this - REs are not capable of robustly extracting data from
XML (you can hack something up that works with the particular sample data
files you're testing with, but it will be extremely fragile, and even minor
changes in the structure of the input file can and will break your script).
Use an XML parser.  If you really insist on not using an XML parser, then at
least use xml2[2] to convert the XML to a line-oriented format suitable for
use as input to sed/grep/awk/etc.
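
e.g. a rough sketch of that approach (the filename is an example) - xml2
flattens the XML into one path=value line per element, which grep/sed/awk can
then handle:

  unzip -p document.odt content.xml | xml2 | grep 'text:p'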

Or write a perl or python script to do it. Both languages have libraries for
working with open document files, zip files, xml files, etc.  There are also
libs for working with various versions of microsoft's office file formats.
Other languages may also have similarly useful libraries.


Also worth looking at, pandoc[3] is capable of converting between many
different text document formats (incl. odt, docx, various flavours of
markdown, html, rst, plain text and others).


If the files are pdf then you could use pdftotext from poppler-utils[4].
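
e.g. for pandoc and pdftotext respectively (filenames are examples):

  pandoc -f odt -t plain document.odt -o document.txt
  pdftotext document.pdf document.txt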



[1] http://xmlstar.sourceforge.net/

[2] AFAIK, xml2 doesn't currently have a home page, and hasn't had one for
years. It's packaged for Debian and probably other distros and if you need the
source code, your nearest debian mirror is probably the best place to find it.

https://tracker.debian.org/pkg/xml2

[3] https://pandoc.org/

[4] http://poppler.freedesktop.org/


> I've just changed the cron jobs from the sending servers to space them apart
> and also upped max / min spare servers.
>
> Previously I have tried other approaches like the JAVA headless and running
> OO as services and using HA Proxy. They haven't been successful (could have
> been my implementations).
>
> The script loads its own version of OO each time a connection is made. Is
> there a better way of doing this? Seems like an awfully big VM to falls
> over/hang all the time.
>
> Your help / ideas / rants are appreciated.

Running OO to convert files seems like overkill unless there's no other option
- many file formats have small, standalone tools for converting them to other
formats and/or extracting data from them.  And several more have libraries for
reading and/or writing them in common languages like perl or python.  This is
almost always a better option than using a heavyweight process like OO or LO
to do the conversion.

If converting with OO is the only option, then I'd suggest using Libre Office
instead of Apache Open Office.  While Open Office still gets some development
effort and attention, what little it gets is completely overshadowed by
developer effort on LO.  LO is years ahead of OO - by comparison, OO is
effectively abandonware.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: expect script

2018-08-19 Thread Craig Sanders via luv-main
On Mon, Aug 20, 2018 at 01:31:26PM +1000, Craig Sanders wrote:
> Perl has several to choose from - the most generic is Expect.pm[1].  There's
> also application-specific ones like Device::SerialPort[2]. or for specific
> network protocols with, e.g., Net::SSH[3] and Net::Telnet[4].
> 
> Device::SerialPort has some basic expect-like functionality built in (e.g. the
> "lookfor" method), or can be used with Expect.pm.

Another one worth looking at is Net::CLI::Interact

https://metacpan.org/release/Net-CLI-Interact

Don't be fooled by the Net:: prefix, it works for serial connections too - the
tutorial says:


Introduction

Automating command line interface (CLI) interactions is not a new idea, but
can be tricky to implement. Net::CLI::Interact aims to provide a simple and
manageable interface to CLI interactions, supporting:

 * SSH, Telnet and Serial-Line connections

 * Unix and Windows support

 * Reusable device command phrasebooks

The module exists to support developers of applications and libraries
which must interact with a command line interface. The SYNOPSIS section of
Net::CLI::Interact has an overview of the commands demonstrated in this
document.


This has sub-modules for dealing with specific hardware vendors and models
(e.g. cisco)

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: expect script

2018-08-19 Thread Craig Sanders via luv-main
On Mon, Aug 20, 2018 at 10:29:46AM +1000, cory seligman wrote:
> I'm having some trouble making expect work.

expect itself is a complete PITA.  IMO you are better off using one of the
expect-like modules for a general purpose programming language like perl
or python.

Also IMO, the time and effort you put into learning expect are better used
learning a general purpose language - expect has only one use which you'll
probably only need a few times in a decade at most, while perl or python or
whatever has any number of uses.

The only reason to use expect is because a) you are already an expert in it,
or b) you like tcl.  Everyone else should use their favourite programming
language.


Perl has several to choose from - the most generic is Expect.pm[1].  There's
also application-specific ones like Device::SerialPort[2]. or for specific
network protocols with, e.g., Net::SSH[3] and Net::Telnet[4].

Device::SerialPort has some basic expect-like functionality built in (e.g. the
"lookfor" method), or can be used with Expect.pm.

Python has Pexpect[5]

google for "yourlanguage expect" for other languages.


[1] https://metacpan.org/release/RGIERSIG/Expect-1.21
[2] https://metacpan.org/pod/Device::SerialPort
[3] https://metacpan.org/release/Net-SSH
[4] https://metacpan.org/release/Net-Telnet
[5] http://www.noah.org/wiki/Pexpect

craig

ps: i used to know expect.  I gave it up about 20 years ago when I first
discovered Net::Telnet, and now use perl whenever i need to do that sort of
thing.

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main

