Re: Fixing the LUV list?

2015-12-04 Thread Craig Sanders via luv-main
On Sat, Dec 05, 2015 at 04:34:40PM +1100, Joel W. Shea via luv-main wrote:
> > http://lists.luv.asn.au/pipermail/luv-talk/2015-November/003584.html
> 
> In that thread, I have attempted to convince the list administrator to
> use dmarc_moderation_action *instead of* from_is_list, as recommended by
> the Mailman documentation; effectively only rewriting the "From:" field
> where necessary, and I believe that's an acceptable compromise to many
> more people than the current solution. 

that's a not-unreasonable compromise. it only mangles messages where
the alternative would be to have them bounce or be discarded, and, most
importantly, it doesn't munge other messages where DMARC is not an issue.


> Only where the message has been sent from a domain with a restrictive
> DMARC policy like dmarc_moderation_action does; not every single
> message, as from_is_list does.

i'm in favour of it, as a compromise.  i'd prefer not to mangle
mail just to pander to giant corporations' broken non-standards,
but at least dmarc_moderation_action seems to do minimal damage.
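
for reference, on Mailman 2.1.18+ both knobs are per-list settings that
the admin can flip with config_list.  a minimal sketch (the list name,
paths and values are assumptions, not taken from the actual LUV config):

    # dump the current settings for inspection
    /usr/lib/mailman/bin/config_list -o luv-main.cfg luv-main

    # in the dumped file (or in a small fragment applied with -i):
    #   from_is_list = 0              # 0 = don't munge every message
    #   dmarc_moderation_action = 1   # 1 = "Munge From", applied only to senders
    #                                 #     whose domain publishes a restrictive DMARC policy
    /usr/lib/mailman/bin/config_list -i luv-main.cfg luv-main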

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Is my root partition dying?

2015-12-16 Thread Craig Sanders via luv-main
On Thu, Dec 17, 2015 at 01:09:46AM +0700, Robert Parker via luv-main wrote:

> > Thanks Rick, I actually rsync everything to an local external drive
> > daily
>
> Well I hope you are not doing it with the -delete option in place
> because if you are it will faithfully remove from your backup set
> everything that has gone missing from your source drive.

more importantly, using rsync's --delete option won't leave cruft from
uninstalled packages and other deleted files strewn all over your
filesystem.
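
for example, something like this (paths are hypothetical; and remember
that --delete removes anything on the destination that no longer exists
on the source, so be careful where you point it):

    rsync -aHAXx --delete --numeric-ids / /mnt/newdisk/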

i made the mistake of forgetting to use --delete on an rsync transfer
of one of my systems' OS disks to a new disk once, and didn't discover
it until after i'd made the final swap to the new disk. it left me with
an enormous mess that took over a year of gradual cleanups plus a final
concerted effort involving find and several custom scripts to process
/var/lib/dpkg/info/* files to tidy up. even now i'm not 100% sure i've
got it all.

and yes, all that cruft did cause numerous problems. inevitable, really,
with extra crap like partial packages, obsolete libs and binaries.


don't try this at home, it'll suck.


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Is my root partition dying?

2015-12-16 Thread Craig Sanders via luv-main
On Thu, Dec 17, 2015 at 11:42:58AM +1100, Craig Sanders via luv-main wrote:
> more importantly, using rsync's --delete option won't leave cruft from
> uninstalled packages and other deleted files strewn all over your
> filesystem.

this applies to upgraded packages too.

without --delete, an rsync / upgrade / rsync sequence will leave parts
of the OLD versions of the packages on the rsync target.

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: No networking and no MATE

2016-01-02 Thread Craig Sanders via luv-main
On Sat, Jan 02, 2016 at 07:57:13PM +1100, Craig Sanders wrote:

> Optionally add '| grep -Ev "pkg1|pkg2|pkg3..."' after the sed but
> before the close-parenthesis if you want to exclude particular
> packages from being re-installed.

personally, i'd exclude at least '^lib' so that lib packages are marked
as auto-installed by apt rather than manually installed...when they are
obsoleted by future upgrades they can then be removed automatically with
'apt-get --purge autoremove'
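
i.e. combining the command from my other message with that exclusion
would look something like this (the date is just an example):

    apt-get -d -u install $(awk '/^2015-12-30 ..:..:.. remove / {print $4}' \
        /var/log/dpkg.log | sed -e 's/:all//' | grep -Ev '^lib')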

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: No networking and no MATE

2016-01-02 Thread Craig Sanders via luv-main
On Sat, Jan 02, 2016 at 07:35:46PM +1100, David Zuccaro wrote:

> No the question is which packages will give me networking back?
> 
> apt says network-manager has no installation candidate.

You need to find out which packages you removed.

Try:

grep remove /var/log/dpkg.log

Or if you know the packages were removed on a certain date
(e.g. 2015-12-30), you can get a list of just the package names by
printing only the 4th field of the log file.

awk '/^2015-12-30 ..:..:.. remove / {print $4}' /var/log/dpkg.log | sed -e 's/:all//'

(the sed command strips ':all' from package names. apt-get copes with
architecture specification like :amd64 or :i386 but barfs on :all)

That list can be used directly with apt-get with:

apt-get -d -u install $(awk '/^2015-12-30 ..:..:.. remove / {print $4}' \
    /var/log/dpkg.log | sed -e 's/:all//')

Run again without the '-d' (download-only) option when you are sure it's
only going to install the stuff you want.

Optionally add '| grep -Ev "pkg1|pkg2|pkg3..."' after the sed but before
the close-parenthesis if you want to exclude particular packages from being
re-installed.


NOTE: if dpkg.log has been rotated since you removed the packages, try
dpkg.log.1 or dpkg.log.2.gz etc.
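
if you're not sure which log it ended up in, something like this will
search the current and rotated logs in one go (zgrep copes with both
compressed and uncompressed files):

    zgrep ' remove ' /var/log/dpkg.log*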

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: No networking and no MATE

2016-01-02 Thread Craig Sanders via luv-main
On Sat, Jan 02, 2016 at 07:00:57PM +1100, David Zuccaro wrote:
> I have accidentally removed some packages from a debian system and now
> I have lost networking and MATE. Can I reinstall the removed packages
> from the install cd?

yes.

you can either install the packages individually with dpkg (which will
be tedious and annoying) or configure apt to look for packages on the
cdrom.

according to the debian web page, the easiest way to do that is:

https://wiki.debian.org/SourcesList#CD-ROM

CD-ROM

If you'd rather use your CD-ROM for installing packages or updating
your system automatically with APT, you can put it in your
/etc/apt/sources.list. To do so, you can use the apt-cdrom program
like this:

# apt-cdrom add

with the Debian CD-ROM in the drive.

You can use -d for the directory of the CD-ROM mount point or add a
non-CD mount point (i.e. a USB keydrive).


dunno if the CD has to be mounted first or not.
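
so, roughly (the package names here are just examples of what you might
need to put back):

    # apt-cdrom add
    # apt-get install network-manager mate-desktop-environment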


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: No networking and no MATE

2016-01-02 Thread Craig Sanders via luv-main
On Sat, Jan 02, 2016 at 09:59:27PM +1100, David Zuccaro wrote:

> Thanks once again Craig and Brian I owe you a beer!

i can't drink beer (i only just got a kidney, transplanted on Dec 7 -
and don't want to put it at risk by re-acquiring an alcohol habit) but
if you want to repay the favour, please add a blank line before your
replies to quoted text. and another blank line before the next slab of
quoting. and break up long paragraphs into multiple short paragraphs
(where it makes sense to do so).

it's really hard to read your replies because it's hard to tell where
the quote ends and your reply starts.

blank space is free and makes things much more readable.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


blessed silence (was Re: free Sun 1RU servers)

2016-01-06 Thread Craig Sanders via luv-main
On Tue, Jan 05, 2016 at 07:46:34PM +1100, Russell Coker wrote:
> The servers are extremely noisy which is one of the reasons why I
> haven't done any other training on the server in my home (the other
> reason being that I have some other servers for this purpose).

FYI, i saw this while reading the FAQ on http://quietpc.com.au/ - sound
dampening server racks. probably horribly/scarily expensive.

http://www.acoustiproducts.com/en/quiet_rack_cabinets.asp

(alternatively, some of the fans in the servers **may** be replaceable
with quiet fans - many available from Quiet PC)


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: blessed silence (was Re: free Sun 1RU servers)

2016-01-07 Thread Craig Sanders via luv-main
On Fri, Jan 08, 2016 at 03:28:01AM +1100, Russell Coker wrote:
> On Thu, 7 Jan 2016 04:18:31 PM Craig Sanders via luv-main wrote:
> > (alternatively, some of the fans in the servers **may** be
> > replaceable with quiet fans - many available from Quiet PC)
>
> 1RU systems have small diameter fans which have to spin at high speeds
> to get enough air flow which is always going to make more noise than
> the larger fans in 2RU servers or tower systems.

umm, yes... but it's not hard to find 40 or 60mm fans that are MUCH
quieter and better quality than the ones that come as standard in
servers, without sacrificing airflow. it's not even hard to find small
fans that are quieter, better quality, and move more air than the
standard supplied fans.

as Rick said, server manufacturers typically don't bother optimising for
low fan noise because most customers don't care - the servers will be
going into a noisy server room anyway.


e.g. i just ordered a few "Scythe 40mm Mini Kaze Silent Fan (20mm Deep)"
for $12.73 each from umart.

http://www.umart.com.au/umart1/pro/Products-details.phtml?id=10=229=2=90817

4.11 CFM (7 m3/hour) @ 14dBA (3500 rpm)

They also have some Noctua fans that run at 4500 rpm and move 8.2
m3/hour at 17.9dBA, priced at $23 and $29.

there are numerous other options available if you google for "quiet PC
fans 40mm".


> As for the Sun servers I'm offering.  They are old and somewhat slow
> by today's standards.  If they can work for someone as they currently
> are then that's great.  If they need to have anything done with them
> then it's probably not worth it.

yep, sure... and whoever takes them off your hands may be interested in
getting them to run quieter and cooler, with fans that don't wear out
quickly because they have crappy sleeve bearings.

personally, i'd just build my own server with commodity PC hardware in
a rack-mount case (and replace the fans, of course).

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


zfs snapshot tools

2015-12-21 Thread Craig Sanders via luv-main
On Mon, Dec 21, 2015 at 11:25:29AM +1100, Trent W. Buck via luv-main wrote:
> One thing rsnapshot does reasonably well is faking multiple tape
> rotations within the snapshot set.  e.g. you say "1 yearly, 2 monthlies,
> and 7 dailies", and it works out which snapshots to expire.
> I don't know how to do that in btrfs/zfs land.

personally, i rolled my own simple snapshot rotation script for zfs, but
One Of These Days i'll probably switch to either zfsnap:

Package: zfsnap
Version: 1.11.1-3
Installed-Size: 48
Maintainer: John Goerzen 
Architecture: all
Depends: zfs-fuse | zfsutils | zfs, bc
Description-en: Automatic snapshot creation and removal for ZFS
 zfSnap is a simple sh script to make rolling zfs snapshots with cron. The main
 advantage of zfSnap is it's written in 100% pure /bin/sh so it doesn't require
 any additional software to run.
 .
 zfSnap keeps all information about snapshot in snapshot name.
 .
 zfs snapshot names are in the format of Timestamp--TimeToLive.
 .
 Timestamp includes the date and time when the snapshot was created and
 TimeToLive (TTL) is the amount of time for the snapshot to stay alive before
 it's ready for deletion.
Description-md5: 43c80483bf622b9e3c64221fe60f1f09
Homepage: https://github.com/graudeejs/zfSnap


there are several similar snapshotting tools available, easily found
with googling for 'zfs snapshot'.
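
for what it's worth, the home-rolled approach boils down to something
like this (a rough sketch only - the dataset name and retention are
made up, and it's not the actual script i use):

    #!/bin/sh
    # take a daily snapshot and expire old ones - run daily from cron
    DATASET="tank/home"     # assumed dataset
    KEEP_DAYS=7             # how many dailies to keep

    zfs snapshot "${DATASET}@daily-$(date +%Y-%m-%d)"

    # destroy dailies older than the cutoff (names sort lexically by date)
    cutoff="daily-$(date -d "-${KEEP_DAYS} days" +%Y-%m-%d)"
    zfs list -H -t snapshot -o name -r "$DATASET" |
      awk -F@ -v cutoff="$cutoff" '$2 ~ /^daily-/ && $2 < cutoff {print}' |
      xargs -r -n1 zfs destroy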



BTW, see also simplesnap:


Package: simplesnap
Version: 1.0.3
Maintainer: John Goerzen 
Architecture: all
Depends: zfs-fuse | zfsutils | zfs, liblockfile-bin
Suggests: zfsnap
Description-en: Simple and powerful network transmission of ZFS snapshots
 simplesnap is a simple way to send ZFS snapshots across a network.
 Although it can serve many purposes, its primary goal is
 to manage backups from one ZFS filesystem to a backup filesystem
 also running ZFS, using incremental backups to minimize network
 traffic and disk usage.
 .
 simplesnap is designed to perfectly complement
 snapshotting tools, permitting rotating backups with arbitrary
 retention periods. It lets multiple machines back up a single
 target, lets one machine back up multiple targets, and keeps it
 all straight.
 .
 simplesnap is easy; there is no configuration file needed. One
 ZFS property is available to exclude datasets/filesystems. ZFS
 datasets are automatically discovered on machines being backed
 up.
 .
 simplesnap  is robust in the face of interrupted
 transfers, and needs little help to keep running.
 .
 Unlike many similar tools, simplesnap does not
 require full root access to the machines being backed up. It
 runs only a small wrapper as root, and the wrapper has only three
 commands it implements.
Homepage: https://github.com/jgoerzen/simplesnap


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Fixing the LUV list?

2015-12-19 Thread Craig Sanders via luv-main
On Sat, Dec 19, 2015 at 04:04:45PM +0100, Anders Holmström via luv-main wrote:
> As mentioned it was on luv-talk. As I recall there was no announcement; the
> change was implemented without warning and /then/ there was some discussion.

By "announcement", i mean Russell said "I'm doing this" and gave his reasons.

given that he's the list admin, that counts as an announcement to me, regardless
of the fact that i disagree with both his decision and his reasons.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Fixing the LUV list?

2015-12-18 Thread Craig Sanders via luv-main
On Sat, Dec 19, 2015 at 04:18:37PM +1100, Erik Christiansen via luv-main wrote:

> And I concur, holding secret discussions on another list is not an
> acceptable substitute to addressing this list's problems here.

they weren't secret discussions.  luv-talk is a public list, and 
the changes were made to that list first, and then on luv-main.


it wasn't exactly a discussion, either.  an announcement was
made and a few people (including myself) objected.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Is my root partition dying?

2015-12-18 Thread Craig Sanders via luv-main
On Fri, Dec 18, 2015 at 09:17:49PM +1100, Russell Coker via luv-main wrote:
> On Fri, 18 Dec 2015 08:58:56 PM David Zuccaro via luv-main wrote:
> > Anyone know where I can buy a >= 2TB disk in the Elwood area?
> 
> http://www.msy.com.au/stores
> 
> MSY has a store in Malvern.  MSY generally has low prices, good service, and 
> a 
> reasonable stock of everything that's common.

good service if what you want to do is just buy stuff with no fuss,
and without needing any advice or to ask anything but the most basic
questions (like "how much?"). they're great if you know what you want
and want it at a good price, but IME they're not in the least bit
focussed on answering questions.

this is good for me because i'd rather not pay extra to have "service"
staff on hand who know only a tiny fraction of what i know, or am
capable of researching online, anyway.


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: automatically starting KVM

2015-12-23 Thread Craig Sanders via luv-main
On Thu, Dec 24, 2015 at 01:07:50PM +1100, Russell Coker via luv-main wrote:

> I've editited XML by hand before and written scripts to do it back
> when I was working on clustering software which also had the flaw of
> requiring XML but provided no automated way of creating it.

virsh has many subcommands for manipulating the XML files, as well as
for just editing them in $EDITOR.

> Getting KVM working from the command line is easy enough (for
> definitions of easy that include a 500 character command).  But how do
> you start it on boot and keep it running?

on debian, install the libvirt packages: libvirt-bin, libvirt-clients,
libvirt-daemon, libvirt-daemon-system, and libvirt-doc.

it would be similar on other distros.
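
to get a guest starting at boot, the relevant virsh incantations look
something like this (the guest name and XML path are hypothetical):

    virsh define /root/myguest.xml   # register the domain from its XML
    virsh start myguest              # start it now
    virsh autostart myguest          # have libvirtd start it at boot
    virsh edit myguest               # edit the XML in $EDITOR, with validation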

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SPF and DKIM checks on mail to the list

2015-12-23 Thread Craig Sanders via luv-main
On Wed, Dec 23, 2015 at 07:03:48PM -0800, Rick Moen via luv-main wrote:
> > Without testing, i'd guess that renaming Reply-To: to From: could be:
> > 
> > # rename Reply-To: header to From:
> > :0 fhw
> > * (^TO|^FROM|^FROM_DAEMON|^Sender:|^X-Been-There:|^List-[^:]*:).*@(lists.)?luv\.asn\.au
> > | /usr/bin/formail -R Reply-To From -U From
> 
> Seems to work a treat, thanks!
> 
> I particularly like that my attribution string (as above) is back to
> being correct.

thanks Mr guinea-pig, now that i know it works i'll use it myself :-)

craig
Research Fellow,
Evil Mad Scientists Pty. Ltd.

(who needs an ethics committee anyway)

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SPF and DKIM checks on mail to the list

2015-12-23 Thread Craig Sanders via luv-main
On Wed, Dec 23, 2015 at 04:10:04AM -0800, Rick Moen via luv-main wrote:
> Quoting Joel W. Shea via luv-main (luv-main@luv.asn.au):
> 
> > Yes, but many, including myself; are frustrated by how their MUA
> > behaves as a result, and some even resorting to custom procmail
> > rules to work around it.
>
> Anyone care to share a tested procmail recipe to (locally here, after
> receipt) retroactively correct From: to what the sender specified,
> then remove the spurious MLM-added Reply-To: line?  Thanks in advance.

I don't have a working and tested rule but it should be possible to get
procmail to:

1. delete the From: header
2. rename the Reply-To: header to From:

Unfortunately, there's no way of restoring any original Reply-To header
- the list is now configured to destroy that.



in my case, I already have a rule from years ago renaming Reply-To
to X-Old-Reply-To to reduce the annoyance of previous manifestations of
Reply-To munging abominations:

# rename Reply-To header
:0 fhw
| /usr/bin/formail -R Reply-To X-Old-Reply-To


Without testing, i'd guess that renaming Reply-To: to From: could be:

# rename Reply-To: header to From:
:0 fhw
* (^TO|^FROM|^FROM_DAEMON|^Sender:|^X-Been-There:|^List-[^:]*:).*@(lists.)?luv\.asn\.au
| /usr/bin/formail -R Reply-To From -U From

or possibly:

| /usr/bin/formail -R Reply-To From | /usr/bin/formail -U From

(i.e. rename Reply-To to From, and then delete all but the *last* From:
header)

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: changes to mailing list

2015-11-25 Thread Craig Sanders via luv-main
On Thu, Nov 26, 2015 at 09:08:50AM +1100, Tony Langdon via luv-main wrote:
> On 25/11/2015 9:13 PM, Erik Christiansen via luv-main wrote:
> 
> > Interestingly, 'r', 'g', and 'L' all work correctly on Tony's posts, &
> > Rick's, and Brian's. (And yet, Reply-To: is similar in all. Tried it 5
> > times, and can't for the life of me see why yours behave differently.)
> 
> Same result here.  Only Craig's posts don't allow me to reply to all.
> This function worked properly with your post just now, Erik.

now that's really bizarre. 

I can understand why my MUA (mutt) has difficulty with Reply-To (because
I have a procmail rule to rename that header to X-Old-Reply-To to defeat
the Reply-To munging of some lists, so there is no Reply-To header for
mutt to use).

But there's no reason why a message I send to this list should be any
different to a message sent by someone else.

I do make a habit of trimming the CC: header unless I really want to
send a CC to someone, which doesn't happen often - I usually want to
send either a list reply or a private reply, almost never both.


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: changes to mailing list

2015-11-25 Thread Craig Sanders via luv-main
On Wed, Nov 25, 2015 at 07:45:33PM +1100, Craig Sanders via luv-main wrote:
> but not on this list any more.  The From: and Reply-To: headers
> are both messed up.

All three reply styles - 'r', 'g', and 'L' - reply to the list and zero
other addresses.

The 'r' private reply would probably work if i didn't have long-standing
procmail rules to partially work around the damage of Reply-To munging.

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: blessed silence (was Re: free Sun 1RU servers)

2016-01-08 Thread Craig Sanders via luv-main
On Fri, Jan 08, 2016 at 02:37:10PM +1100, Rohan McLeod wrote:

> But what I finished up getting was a Corsair H55 Liquid Cooler
> see for example.
> http://www.msy.com.au/vic/northmelbourne/pc-accessories/12163-corsair-cwch55-h55-universal-hydro-high-performance-liquid-cpu-cooler.html
> 
> This seems like a very elegant solution, the pump and heat absorber
> are lightweight, sit close to the CPU and have a very solid
> conection to the M/B. The radiator sits under a 120mm case fan
> (non-proprietary); and the result with a Asus P6T M/B with 20GB of RAM
> and Intel-i7 quad-core CPU is an almost silent box and a CPU which is
> hard to push over 45 deg C

nice.  $82 isn't bad for a plug-and-play kit where you don't have to
mess around with cutting tubing and crap (which has always put me off
water-cooling before - too much hassle...and too much chance of screwing
it up and soaking my m/b)

the stock fans on my AMD 1090T CPUs are getting a bit old and noisy
now (i've had them a few years - waiting for AMD Zen[1] to be released
before i consider upgrading. the FX-8320 and FX-8350 are nice, but not
$217/$257 worth of nice) so will need replacing with something better.


[1] allegedly Q4 this year.   https://en.wikipedia.org/wiki/AMD_Zen

> ps shouldn't this be on Luv-talk ?

i'd say no - linux users build and upgrade machines, and cooling's an
important thing to know about.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: blessed silence (was Re: free Sun 1RU servers)

2016-01-08 Thread Craig Sanders via luv-main
On Thu, Jan 07, 2016 at 01:47:35PM -0800, Rick Moen wrote:
> Yes, this annoyance goes _way_ back.  Even back in XT clone days, we
> hobbyists noticed that generic Taiwanese clone gear was greatly more
> standardised, and easier / more inexpensive to work on, than any of the
> brand-name gear.  Typically much more IBM-compatible, too (which then
> mattered).

yeah, i remember.  it's when i first developed my dislike of name-brand
PC gear.  the whitebox clone stuff was so much less hassle.  also better
and faster for a LOT less money.


> > > With the aftermarket fan, it's ultra-cool and almost completely
> > > silent, and some day I hope to make it a 'stealth' modern
> > > workstation with a 2010s
> > 
> > why 2010 when 2015/2016 stuff is so much better, cooler, less power,
> > and faster?
> 
> I feel bad following up to say this, but what I said and meant was
> '2010s', i.e. this decade, not 2010 the year.

sorry, i misread that.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SSHD (and steam and windows and other stuff)

2016-01-13 Thread Craig Sanders via luv-main
On Wed, Jan 13, 2016 at 01:28:57PM +1100, Trent W. Buck wrote:
> AFAICT the only reason SSHDs exist are:
> 
>  * Windows has nothing like bcache/l2arc; or

it does. it's called ReadyBoost. Compared to bcache/flashcache or L2ARC
for ZFS, it sucks.  You can't just tell Windows to use an SSD (or
partition thereof) to cache an arbitrary disk...well, in theory you
can, but in practice Windows itself decides whether that option will be
available by its own inscrutable and undocumented method.

I discovered this just last week after upgrading my win7 games box to
have an SSD as a boot disk, decided to try ReadyBoost for my main 2TB
steam library drive (not an SSHD) so made a 40GB partition for it.  No
matter what I tried, I couldn't get windows to make the option available
in its Disk Manager GUI - it was there, just greyed out.

I gave up and expanded the main partition...will have to manually move
big games to the fast SSD, which works nicely - witcher 3 loads much
faster off the SSD. More importantly, save games load in about 25
seconds rather than 60-90 seconds (which is really annoying when you die
repeatedly because you thought attacking a royal griffin seemed like a
good idea, 90 seconds to load the save, 15 seconds to die again while
trying to run away, repeat until you succeed or rage-quit)

Microsoft seems to have abandoned the ReadyBoost idea. and they seem to
be confused about what SSD caching of hard disks is good for - their
documentation is all about how it might help on low-memory machines
(<4GB), presumably because they don't have enough RAM to cache large
chunks of disk in RAM. as bcache etc on linux prove, SSD caching is
beneficial even on machines with lots of RAM.
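
(for comparison, setting up bcache on linux is basically just the
following - the device names are hypothetical, and make-bcache wipes
them:)

    # create the backing device and the cache device in one go;
    # they attach to each other automatically
    make-bcache -B /dev/sdb -C /dev/sdc1

    # the cached device then appears as /dev/bcache0 - mkfs and mount that
    mkfs.ext4 /dev/bcache0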

windows is so damn primitive and restricted. i can't understand how
anyone could possibly think it's any good or that it makes a decent
desktop environment. it's almost unusable in its awful crappiness. i'm
glad i only have to use it as a games launcher.


interestingly, i don't even have to use my KVM to switch to the windows
box any more if i don't want to - steam's in-home streaming feature
means that in addition to running native linux games on steam, i can
"stream" windows-only games from my win7 box. that means it executes on
windows but displays on my linux box in either full-screen or windowed
mode. would probably suck on wireless but there's no noticeable lag or
performance loss on a wired gigabit network.  i'm kind of amazed at how
well it works.

they built this feature to support their steamlink product, a box with
an HDMI port running linux to plug into a TV in the lounge room and
stream games over the LAN from Windows, Mac, and Linux machines running
steam.


>  * My computer only has one disk bay.

(you don't really need a disk bay for an SSD. just a sata port plus data
and power cables. and if you wanna get fancy you can sticky-tape it to
the side of the case :)

> 
> That is, if you have a commodity server,
> you're much better off buying the HDD and SSD components separately.
> 
> (Am I wrong?)

a good fast, large SSD will be better.  more expensive too.

but given that the SSHDs i bought were roughly the same price as
non-SSHDs, i don't see any harm and some potential benefit in using them
on linux.  8GB cache isn't much but it "just works" without any hassle
or configuration. it's giving me some SSD caching on my ZFS 'backup'
pool without having to add another SSD to the system and dedicate an SSD
partition to the task.

BTW, the Transcend SSD370s are pretty good value at the moment.  I
bought a 256GB for $125.  512GB is $261.  Specs are surprisingly good
for the price...not as good as, say, the Samsung 850 Pro, but they cost
$188 for 256GB and $333 for 512GB.

but if you're not in a hurry to get an SSD, rumour has it that there
will be significant increases in capacity AND reductions in price on
SSDs this year.

also BTW, this article on slashdot from a few days ago was interesting:

http://hardware.slashdot.org/story/16/01/10/068211/ocz-revodrive-400-nvme-ssd-unveiled-with-nearly-27gbsec-tested-throughput

no price was mentioned, but it's an M.2 (x4 PCI-e variant) SSD getting
about 2.7GB/s reads and 1.6GB/s writes. it plugs into an M.2 slot on a
motherboard, or into a PCI-e adaptor (adaptors with one or two M.2 slots
are reasonably cheap).  available in sizes from 128GB to 1TB.

will probably turn out to be unreasonably expensive, but should serve to
push down the price of lesser SSDs. and it's a sign of things to come
even for budget-conscious geeks like me over the next few years.

all i want is a pair or two of 5 or 10TB SSDs approximating those speeds
for about $100-$200/drive. that's not too much to ask, is it? that'd make a
nice ZFS pool. at a guess, gear like that may be only 3 years away.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SSHD

2016-01-12 Thread Craig Sanders via luv-main
On Wed, Jan 13, 2016 at 09:04:42AM +1100, Craig Sanders wrote:
> I'm not using any ZIL or SSD L2ARC on the backup pool so having the
> 4GB SSD cache (per drive) on it is probably beneficial.

my mistake.  it's actually 8GB not 4GB.

craig

-- 
craig sanders 

BOFH excuse #416:

We're out of slots on the server
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SSHD

2016-01-12 Thread Craig Sanders via luv-main
On Tue, Jan 12, 2016 at 05:18:23PM +1100, Tennessee Leeuwenburg wrote:
> Based on my general new article reading, I thought everything above
> 6GB was using shingled storage which doesn't interest me so much. I'd
> be interested if you find out otherwise.

dunno about all drives 6TB and above, but definitely the Seagate Archive
drives (MSY have 6 & 8TB models, and IIRC Seagate recently-ish released
a 10TB model).


I've got a set of 4 x 4TB Seagate SSHDs, have no complaints about either
quality or performance.  I've only got them because I had one (which i
was intending to use in my win7 steam games box) when I urgently needed
to upgrade my ZFS 'backup' pool, so i bought another 3 and set them up 
as 2 mirrored pairs (for a total of 8TB).

I'm not using any ZIL or SSD L2ARC on the backup pool so having the 4GB
SSD cache (per drive) on it is probably beneficial.  I haven't done
any performance testing (the urgency of the upgrade - the backup pool
had hit 90+% utilisation which is around where ZFS performance goes to
absolute shit - precluded running bonnie++ before use) but they seem
reasonably fast to me, for magnetic spinning disks. they're certainly
no worse than non-SSHD disks and cost about the same - a little cheaper
than a Seagate NAS or WD Red.
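
(if i ever did want to add an L2ARC or ZIL to that pool, it'd just be
something like the following - the device names are made up:)

    zpool add backup cache /dev/disk/by-id/ata-SOME_SSD-part5   # L2ARC
    zpool add backup log   /dev/disk/by-id/ata-SOME_SSD-part6   # ZIL / SLOG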


$ zpool list -v backup
NAME       SIZE  ALLOC   FREE  EXPANDSZ   FRAG    CAP  DEDUP  HEALTH  ALTROOT
backup    7.25T  2.54T  4.71T         -    20%    35%  1.00x  ONLINE  -
  mirror  3.62T  1.27T  2.35T         -    20%    35%
    sdb       -      -      -         -      -      -
    sdi       -      -      -         -      -      -
  mirror  3.62T  1.27T  2.35T         -    20%    35%
    sdd       -      -      -         -      -      -
    sdc       -      -      -         -      -      -

$ list_disks | grep sd[bicd]
sdb ST4000DX001-1CE168_Z303PTHA
sdc ST4000DX001-1CE168_Z302ZSGB
sdd ST4000DX001-1CE168_Z303PVZ9
sdi ST4000DX001-1CE168_Z303PSH6

craig

ps: if anyone's wondering what 'list_disks' is, it's an alias of mine:

(line breaks and indentation added for readability)

alias list_disks='find /dev/disk/by-id/ -type l |
  awk '\''/\/(s?ata|usb|scsi)/ && ! /-part[0-9]/'\'' |
  while read disk ; do
    echo $(basename $(readlink $disk)) $(basename $disk);
  done |
  sort'

can be saved as a script instead of an alias, just change the '\''
(which is how you quote single-quotes inside other single-quotes) to
plain '

i should probably simplify that one day and do the basename and readlink
stuff in awk rather than in a while read loop. may as well do it now:

alias list_disks2='find /dev/disk/by-id/ -type l |
  awk '\''@load "filefuncs"
  /\/(s?ata|usb|scsi)/ && ! /-part[0-9]/ {
    stat($1, sd) ;
    gsub(/^.*\//, "", sd["linkval"]) ;
    gsub(/^.*\//, "", sd["name"]) ;
    print sd["linkval"], sd["name"]
  }'\'' |
  sort'

this will only work with GNU awk, won't work in other awks because it
needs the filefuncs extension for stat(), and only gawk has extensions.

the downside is that it's not actually any simpler. certainly no easier
to read and understand. it is much more efficient because it avoids
executing external programs 'basename' and 'readlink' repeatedly, but
that's not really an issue on modern systems. my preference is to
optimise for readability (so i know WTF i was thinking in 6 months time)
rather than performance for shell/awk/etc scripts.


-- 
craig sanders 

BOFH excuse #247:

Due to Federal Budget problems we have been forced to cut back on the number of 
users able to access the system at one time. (namely none allowed)
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Best Database For Storing Images

2016-06-14 Thread Craig Sanders via luv-main
On Mon, Jun 13, 2016 at 06:30:46PM +1000, Andrew McN wrote:
> On 13/06/16 17:21, James Harper wrote:
> > https://wiki.postgresql.org/wiki/BinaryFilesInDB

> I find that article unconvincing, but my concerns are mostly around
> performance.  If performance is of no concern at all, then the
> atomicity/maintenance argument might be significant (though not the
> stuff about storing as a text data type).

I'm not convinced either.  Same as I'm never convinced by the recurring
idea of using an SQL database as a mail-storage backend.

IMO the best option is to use a database to store information about the
image (path and filename or URI, size, width, height, geo-location,
description, and whatever other details are required.  Maybe even a
small thumbnail image), whilst storing the actual image in either:

 - a filesystem, with a hashed directory structure to avoid having too
   many files in one directory (see the sketch after this list). e.g.

   .../images/{00-ff}/{00-ff}/{00-ff}/imagefile.png

   If you're worried about the image pathname getting out of sync with
   the database, you could write your own FUSE fs layered on top of the
   actual fs to automatically update the database if a file is renamed
   or moved.

   BTW, there are FUSE modules for perl and python to make this easier
   if you don't want to use C.


 - An object store like Amazon S3, Openstack's swift, or ceph.

   BTW, there are existing FUSE modules for these that could be modified
   to update a database if an object is changed.


For a very large number of images, an object store is the best
option...you get hugely scalable performance, redundancy, and storage
capacity.
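
a sketch of the hashed-layout idea from the first option above (the base
directory and the choice of content hash are arbitrary):

    # derive a 3-level hashed path from the image's content hash
    f="imagefile.png"
    hash=$(sha256sum "$f" | awk '{print $1}')
    dir="/srv/images/${hash:0:2}/${hash:2:2}/${hash:4:2}"
    mkdir -p "$dir" && cp "$f" "$dir/"
    # then record "$dir/$f" plus the metadata in the database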

craig

-- 
craig sanders 

BOFH excuse #302:

microelectronic Riemannian curved-space fault in write-only file system
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Inconsistent list behaviour

2016-01-15 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 07:34:19PM +1100, Tony Langdon wrote:
> On 15/01/2016 11:52 AM, Craig Sanders via luv-main wrote:

> > BTW, i have .procmailrc rules to get rid of dupes... so if i get
> > CC-ed on a list reply, i see whichever one arrives first.
> 
> Sounds like that would lead to more inconsistencies.

well, no. it's exactly what's happening in your situation: gmail is
doing its own dupe-detection (presumably based on the Message-Id, just
like my procmail rule).  You're getting one or the other copy of the
email (list copy or the directly-sent copy) first and that's the one
you're seeing.  The list has no control over that, and can not have any
control or influence over that.

BTW, given the extra processing and re-sending time required for the
list copy, you're probably mostly seeing direct CCs first.


craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Inconsistent list behaviour

2016-01-15 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 07:41:44PM +1100, Tony Langdon wrote:
> I use IMAP 99% of the time, time will tell, now that I've changed my
> Mailman options.

here's a test message for you.  sent to the list and CC-ed directly to you.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: IPv6

2016-01-15 Thread Craig Sanders via luv-main
On Sat, Jan 16, 2016 at 12:31:47PM +1100, Russell Coker wrote:
> On Sat, 16 Jan 2016 08:36:30 AM Craig Sanders via luv-main wrote:
> Why are we having an argument about comments then?  If they are just
> comments then it shouldn't be a big deal.

because munging them screws up an MUA's ability to reply correctly.

because overwriting the Reply-To: header can make it impossible to reply
to the original sender.

because what mailman is doing with DMARC is broken.

> Stupidity is a problem.  But what we want to do here is to make it possible 
> for people who aren't particularly stupid to work this out without much 
> effort.

they can. It's not hard:  

1. Don't click on links in email.  

2. Don't trust that the message actually came from whoever it claims to
have come from unless it's signed and/or encrypted with a key you (your
system) knows, and even then be wary (keys can't protect you when the
sender is coerced to give up their private key)

that's it.  the rest is implementation details of those two things.

> They could have designed encryption and signing features from the start and 
> methods for recognising new senders.

those things are MUA functions (i.e. message payload), not MTA - i.e.
irrelevant to the SMTP protocol.


> > > Apart from the ones who receive mail viw Gmail, the ones who
> > > complained about my mail going to their spam folders which started me
> > > working on this.
> > 
> > if mailman is breaking DKIM-signed message then that needs to be fixed.
> > mangling headers is a crappy workaround hack, not a fix.
> 
> Fine,  Tell us how to fix it without mangling headers.

why me? i'm not the one who wants to change anything. you've changed
things and caused breakage.

a better idea would be for you to fix it instead of deciding that mangling
headers is a reasonable thing to do.

i.e. you want it, you fix it.

also:

a) i'm not a python programmer

b) i disagree with the concept of DMARC, so i have no desire to implement it or
fix a particularly broken implementation of it.


> > as i said, solve the right (actual!) problem. if mailman's handling of
> > DKIM-signed messages is broken then THAT is the problem that needs to be
> > fixed.
> 
> OK.  Let's fix that.  I don't have the time or skill to fix Mailman code, 
> could 
> you please do it for me?

translation: i've enabled a broken feature and rather than revert that
change, i'll badger someone else to fix the problem for me.  you enabled
it, it's up to you to either fix it or disable it again.

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Inconsistent list behaviour

2016-01-16 Thread Craig Sanders via luv-main
On Sat, Jan 16, 2016 at 07:15:20PM +1100, Tony Langdon wrote:

> > well, no. it's exactly what's happening in your situation: gmail is
> > doing its own dupe-detection (presumably based on the Message-Id,
>
> Which is leading to inconsistent behaviour.  Gmail is filtering the
> messages into the list, but only passing the direct copy on.  Net
> result: inconsistent list behaviour, when seen from my end, because
> the list is now a mixture of list and direct messages. :/

yes, but it's not a problem that the list or the list-admin can do
anything about - it's completely outside of their control or influence.

you can add a filter rule to gmail so that if the msg is To: or CC:
luv-main@luv.asn.au then it gets saved in the luv-main folder.  That
will get the direct messages that don't have the List-* headers.


about the only other thing you can do is have a .signature or similar
asking people NOT to send you CCs of list mail. some people may even
read it and follow your wishes.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: IPv6

2016-01-16 Thread Craig Sanders via luv-main
On Sat, Jan 16, 2016 at 05:08:23PM +1100, Russell Coker wrote:
> On Sat, 16 Jan 2016 05:00:58 PM Craig Sanders via luv-main wrote:
> If you believed that there was nothing you could do then you wouldn't
> have spent months arguing.

i haven't. i've mostly ignored it for most of the last few months since
you broke the list and only got sucked into it in the last few days
because the thread's been going on for about a week.

and my procmail rule partially unbreaks the list (except for the munged
Reply-To: header, which 1. is unfixable and 2. isn't used much by
subscribers here AFAIK)


> OK, please go tell Yahoo etc that they are doing it wrong.  Let's
> leave the list server in it's current configuration until you convince
> Yahoo and Gmail of the error of their ways.

if you intend to do nothing, then just say so - don't make up some
bullshit impossible goal that "might" get you to unbreak the list.


> Actually the Gmail users didn't do anything, they just signed up for
> a mail service knowing nothing about DKIM or the Gmail actions that
> would happen when they received DKIM signed mail via a list.

it's really difficult to have much sympathy for the technical problems
experienced by people who chose to use a spyware service run by a giant
corporation.  They CHOSE to have no control over their mail, they get to
just suck it up and accept whatever problems that causes.  They don't
get to assume that their abdication of responsibility for their own
email automatically entitle them to cause problems for others.


> If they were so wrong then Gmail wouldn't implement DKIM/DMARC checks
> on mail it receives.

google has their own reasons for destroying independently run mailing
lists.

> > the mailman devs have obviously got it wrong. even they admit that
> > their implementation is buggy and broken.
>
> What is buggy is the fact that they can't preserve headers and body
> encoding.

there's also the fact that mailman strips attachments and makes other
changes to the message body which is going to invalidate any signing of
the original msg.

Look, the major problem with DMARC is that it uses the From: header.

If it used the Sender: header instead (or used it IF it exists in
the headers), then mailman could do the right thing by stripping
attachments, changing the encoding, or whatever and declare that the
list was the Sender: and DKIM sign it.

> > you're assuming that there is such a way.  IMO DMARC is
> > broken-by-design so it's impossible to do it in any good or even
> > reasonable way.
> >
> > in other words: the way i'd like is to not do it at all. i've said
> > that repeatedly.
>
> It's OK to wish that Yahoo, Gmail, Hotmail, Facebook, and other big
> companies would do things differently.  But your wishes aren't going
> to change anything.  Let's stick to discussing the reality of how to
> deal with mail to/from such services.

i didn't wish they did things differently, at least that's not the
argument i'm making here. my attitude is that their unreasonable demands
about changing the nature of email and mailing lists should be ignored,
not surrendered/pandered to.

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: IPv6

2016-01-16 Thread Craig Sanders via luv-main
On Sat, Jan 16, 2016 at 09:32:14PM +1100, Russell Coker wrote:
> > > Actually the Gmail users didn't do anything, they just signed up for
> > > a mail service knowing nothing about DKIM or the Gmail actions that
> > > would happen when they received DKIM signed mail via a list.
> > 
> > it's really difficult to have much sympathy for the technical problems
> > experienced by people who chose to use a spyware service run by a giant
> > corporation.  They CHOSE to have no control over their mail, they get to
> > just suck it up and accept whatever problems that causes.  They don't
> > get to assume that their abdication of responsibility for their own
> > email automatically entitle them to cause problems for others.
> 
> So this is really a debate about who's the most elite under the guise of 
> discussing list headers?

WTF has "elite" got to do with it?

if you choose to use a crappy service, you get a crappy service. same as
if you choose to eat dogfood out of a can or, worse, McDonalds, you get
to eat crappy pseudo-food.

choosing crap means you get crap. that's just one of the unalterable
facts about how the universe works. you can't choose crap things and
magically get good things.


> > Look, the major problem with DMARC is that it uses the From: header.
> > 
> > If it used the Sender: header instead (or used it IF it exists in
> > the headers), then mailman could do the right thing by stripping
> > attachments, changing the encoding, or whatever and declare that the
> > list was the Sender: and DKIM sign it.
> 
> Well that wasn't what DMARC was designed to do.

DMARC's design is broken.

> We have to either make the list work with mail to/from such services or 
> exclude them from the lists.

so?

how is that different to any other site or individual who deliberately
(or incompetently) breaks the expected standards? are google and
facebook and yahoo etc allowed to redefine standards just by breaking
them and demanding that everyone else follows their new way? who else is
allowed to do that? do you have to be a mega-corporation with billions
of dollars or is that privilege available to us peasants too?


craig

-- 
craig sanders 

BOFH excuse #320:

You've been infected by the Telescoping Hubble virus.
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Inconsistent list behaviour

2016-01-14 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 09:50:22AM +1100, Tony Langdon wrote:
> Foe some reason, I'm only receiving one copy.  Maybe a Gmail oddity?  Or

probably gmail.

BTW, i have .procmailrc rules to get rid of dupes... so if i get CC-ed
on a list reply, i see whichever one arrives first.

This one deletes dupes:

# delete duplicate messages
:0 Wh: msgid.lock
| formail -D 8192 msgid.cache


These two rules store them in ~/mail/Misc/duplicates.

# store duplicate messages in Misc/duplicates
:0 Whc: msgid.lock
| formail -D 16384 Misc/msgid.cache

:0 a:
Misc/duplicates


> I prefer that if you want to reply to me direct, do so.  If you want
> to reply to the list, do so.  Doing both does send a mixed message. ;)

Different people have different preferences when it comes to getting
dupe replies on list mail. some like it, some hate it, some don't care.

I try to remember individual preferences as best i can, but mostly just
default to sending either private or list mail, trimming the CC list.

I always do a 'G' group reply in mutt and edit the To: and CC: headers
manually in vim, because I don't always know until I'm ready to send the
message whether it should be public or private.

One of the reasons I hate From and Reply-To munging is that it takes
that option away from me, or makes it more of a PITA than it was, or
than it needs to be.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SPF/DKIM + DMARC (Was: IPv6)

2016-01-14 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 05:03:39PM +1100, Craig Sanders wrote:
> being able to run mailing lists is an essential part of the open
> internet and *IS* de-centralised.  At least until the corporates
> manage to kill off any alternatives to their spyware services via
> DKIM.

sorry. i've typed DKIM here and in other places, but meant DMARC.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: IPv6

2016-01-14 Thread Craig Sanders via luv-main
On Thu, Jan 14, 2016 at 08:56:52PM -0500, Jason White wrote:
> Russell Coker via luv-main  wrote:
> > On Fri, 15 Jan 2016 09:52:30 AM Tony Langdon via luv-main wrote:
> > > > Facebook is now compelling sysadmins to use SPF or DKIM.  This isn't
> > > > going to go away.  It's only a matter of time before Internode starts
> > > > using DKIM to placate Facebook.
> > > 
> > > Looks like it's a case of having to follow suit., like it or not.

well, SPF is fine.  no problem with that.

DKIM is an ill-conceived abomination.  It actually cares about the
*headers* in a message rather than the *envelope*.  To an MTA, headers
are irrelevant, they're just comments...what matters is the envelope
sender address and the envelope recipient address.

And worse, DKIM cares about the From: header rather than the Sender:
header.

This is just broken in every possible way.

> > Yes.  I'd appreciate it if people would stop acting like I'm doing
> > something I want to do here.  I just want mail to go through
> > reliably and I'm doing what is necessary to achieve that goal.

unfortunately, you still haven't implemented the minimum-damage option
that only munges posts that are sent by users from domains that
implement DKIM (like google or yahoo), and leaves other mail alone.

it's not like anyone ever posts from those domains to our lists anyway.

which is one of the more annoying things about this issue - the
configuration messes things up for active participants on the list, and
it doesn't even provide any benefit to the lurkers who never say or
contribute anything.

That's entirely the wrong thing to do. those who contribute may well
stop bothering if they get annoyed enough, and non-contributors won't
step forward to replace them...if they were inclined to, they'd already
be posting.

driving away those who write the posts (that both they and the lurkers
read) is self-defeating.

> Widespread use of DMARC will result in changes to well established
> conventions.

IMO it's an attempt by major corporate players to completely take over
email so that no email is ever sent that they don't get a copy of to
examine and index and use to build up profiles on individuals.

and to sell to the NSA etc of course.

Message forgery is a solved problem.  SPF works.  DKIM is a) overkill
and b) unnecessary.
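
for reference, SPF is just a DNS TXT record listing the hosts allowed to
send mail for a domain, e.g. (made-up domain and address):

    example.com.  IN  TXT  "v=spf1 a mx ip4:203.0.113.25 -all"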

If individual senders need more identity verification than SPF, there
are numerous encryption and signing options available...with support
built in to many MUAs.



> I don't personally object to having the list server rewrite the "From"
> field and add a "Reply-to" header that designates the original sender;
> but some people have needs which differ from mine, and for them it can
> be an inconvenience.

NO!

those who refuse to learn from history are doomed to repeat the same
damned stupid mistakes.  This issue was settled definitively in the 90s.

Mailing lists should *never*, under any circumstances, mess with the
Reply-To header.  That belongs solely to the original sender.

Lists have several alternatives they can use, including Mail-Followup-To:
and List-Post:

and Lists shouldn't mess with the From: header, either.  No matter what
corporate vermin demand.  WGAF what facebook wants?  how many emails
from luv lists ever go to facebook?

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SPF/DKIM + DMARC (Was: IPv6)

2016-01-14 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 04:30:47AM +, Russell Coker wrote:
> On Fri, 15 Jan 2016 04:01:28 AM Joel W. Shea via luv-main wrote:
> > Yes, it also has the potential to reduce the distributed and
> > decentralised nature of email;
> 
> Not at all.  The distributed and decentralised part of email is
> inherently not mailing lists.  By definition lists are centralised!

yeah. and sucks to be us if we want to run our own lists and not google
or yahoo or msn shite.

being able to run mailing lists is an essential part of the open internet
and *IS* de-centralised.  At least until the corporates manage to kill off
any alternatives to their spyware services via DKIM.

> The real issue is that nothing we agree on matters much if Google and
> Yahoo don't agree.

To the contrary, nothing that google or yahoo demand matters if we just
ignore them.

Neither of them are necessary to the function of our list. If they want
to do a dis-service to their users by rejecting mail from legitimate
lists, that's a problem for them and their users, not a problem for us.

if mail from the list sent to gmail etc bounces because of their
misconfiguration, the correct response is to think "shit happens.
this is nothing new, there's always been idiot mail admins at large
corporations" and ignore it, not jump through their hoops that are
designed to let them gain complete control over all email on the net.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-18 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 10:12:12AM +1000, Piers Rowan wrote:
> - dovecot
> - apache (roundcube webmail)
> - sendmail

unless you're a sendmail expert with a decade or two of experience
working with it, you might want to think about switching to postfix.

> - clamav-milter
> - amavisd-milter
> 
> RAM: 4.10 GB
> CPU: QEMU Virtual CPU version (cpu64-rhel6), 5 cores

CPU allocation of 5 cores is probably overkill for mail, unless you're
processing a LOT of incoming mail with clamav and spamassassin.

how many users do you have, and how much mail are you receiving and
processing (msgs/day and megabytes/day)?

btw, I used to run mail servers at ISPs for thousands of users on 1990s
and 2000s hardware, which was nowhere near as fast (slower disks, slower
CPU, *much* less RAM) as what you can get today for a fraction of the
price. if your mail VM can't even handle a few dozen or a few hundred
users, there's something seriously wrong...probably disk I/O contention
from other services running on the same machine and disk.


> The server is a VM on a host server that also provides http / mysql
> services. 

mail is VERY dependent on disk I/O - especially if you have multiple users
reading their mail via POP/IMAP (or via webmail such as roundcube, which
connects via imap).  and it only gets worse if you have other processes
fighting dovecot for disk access.

If at all possible, you should consider having a dedicated mail server, or
at least dedicated drive(s) for mail, that doesn't have to share disk
I/O with anything else - and certainly not with other I/O heavy services
like a web server or mysql.

If your server is located in-house (i.e. not in a co-lo facility),
you may also want to consider adding a fast SSD (or two in RAID-1
configuration for safety AND roughly double the read performance) just
for the mail spool (and make sure dovecot etc are configured NOT to move
mail to user home directories, but to leave them on the SSD).
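
the dovecot side of that is just pointing mail_location at the SSD,
e.g. (the path and the maildir layout are assumptions):

    # /etc/dovecot/conf.d/10-mail.conf
    mail_location = maildir:/srv/mail-ssd/%u/Maildir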

You can get Samsung 850 Pro 128GB for $116.  $188 for 256GB. if your
total mail size is < 128GB and unlikely to grow that large in the
forseeable future, you're better off with a pair of 128GB drives in
RAID-1 than a single 256GB drive.

if it's a VM at a co-lo facility, talk to them about getting a host with
at least one SSD so you can move your mail spool to that.

the random-access nature of SSDs (i.e. they don't have to waste time
moving the disk head around to access data) mitigates many of the speed
problems caused by having multiple users read large mailboxes at the
same time. spinning hard disks will get around 100 IOPS at best. a good
SSD will get anywhere up to 100,000 IOPS depending on how it's being
used (but you can expect a minimum of 10,000)


> The host server runs cron jobs to poll the email server (importing
> data from mail boxes into the CRM) so - to clutch at straws - 

so you're regularly importing mail into a database of some sort?

that may be the source of your problem - dovecot will be contending
with mysql for disk I/O unless the mysql db and the mail spool are on
different disks (ideally, two separate SSDs. or two separate RAID-1
devices on SSD)

how big is the database?  if it's huge, can you archive older stuff to
another server, or do you need instant access to the old data?

btw it's worthwhile doing some research on database
tuning. here's a useful Q on tuning mysql for SSDs
http://dba.stackexchange.com/questions/59828/ssd-vs-hdd-for-databases


> I am not sure if the host and guest are competing for the disk IO at
> the same time with these calls. Contrary to that is that the host
> server does not experience any slow downs.

they almost certainly are, from what you've said about what the server
is doing.

as russell suggested, run iostat - and run it on the host, not on the VM.

> Before the holidays we added another 30 users to the servers.

30 users is not a lot, and is unlikely to have made much difference
unless they're extraordinarily heavy users of mail - several orders of
magnitude heavier than all your previous users.

i could see adding a few thousand or even a few hundred extra users
making a significant performance impact, but not just a few dozen.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-18 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 12:03:24PM +1000, Piers Rowan wrote:

> sda = a usb backup drive
> dm-0 = / (MySQL) RAID
> dm-1 = / (MySQL) RAID
> 
> dm-2-5 = /home (also where VM's Live) [1 x hot spare]

1. what kind of disks are these?  

can you run:

find /dev/disk/by-id/ \( -iname 'ata-*' -o -iname 'usb-*' \) |
  grep -v -- -part |
  while read -r disk ; do
  echo $(basename $(readlink $disk)) $(basename $disk)
  done |
  sed -re "s/(usb|ata)-// ; s/(SATA|Generic)_//" |
  sort

this will show the brand and model of all disks in the system like this:

$ list_disks
sda WDC_WD10EACS-00ZJB0_WD-WCASJ2114122
sdb ST4000DX001-1CE168_Z303PTHA
sdc ST4000DX001-1CE168_Z302ZSGB
sdd ST4000DX001-1CE168_Z303PVZ9
sde WDC_WD10EACS-00ZJB0_WD-WCASJ2195141
sdf WDC_WD10EARS-00Y5B1_WD-WMAV50933036
sdg ST31000528AS_9VP18CCV
sdh OCZ-VECTOR_OCZ-0974C023I4P2G1B8
sdi ST4000DX001-1CE168_Z303PSH6
sdj OCZ-VECTOR_OCZ-8RL5XW08536INH7R


2. is /home RAID-5? i'm guessing it is since RAID-10 with 3 drives and
a hot-spare doesn't make any sense.  RAID-5 can be dreadfully slow,
especially on random writes.

what kind of virtual disks are you using for the VMs? qcow2 image files?
raw or lvm partitions?  partitions are much faster than qcow2 files.

3. is /home where users' mail lives?

or does dovecot move it to ~user/Mail or similar from /var/mail?

Do the users use their /home directories for anything else?  samba or
nfs exports for example?  would you class that as light or heavy usage?



> iostat -x 10
> Linux 2.6.32-573.7.1.el6.x86_64 19/01/16 _x86_64_(12 CPU)
> 
> avg-cpu:  %user   %nice %system %iowait  %steal   %idle
>5.270.001.743.980.00   89.01
> 
> Device: rrqm/s   wrqm/s r/s w/s   rsec/s   wsec/s avgrq-sz
> avgqu-sz   await  svctm  %util
> sda 144.40   398.34   92.54   95.71  5790.73  4493.15 54.63
> 0.321.71   1.38  25.92

so you're running a backup at the moment?  is that when the slowdown occurs, or
does it happen any old time?

what kind of backup software? rsync?


> dm-0  0.00 0.00  154.36   81.59  1234.90 652.68 8.00
> 0.040.16   0.44  10.43
> dm-1  0.00 0.00  154.36   81.59  1234.90 652.68 8.00
> 0.040.17   0.44  10.44

mysql drives seem fine.


> dm-2  0.00 0.000.000.00 0.00 0.00 8.00
> 0.003.36   3.07   0.00
> dm-3  0.00 0.000.000.00 0.00 0.00 8.00
> 0.000.00   0.00   0.00
> dm-4  0.00 0.009.638.2477.01 65.93 8.00
> 0.22   12.41   1.52   2.72
> dm-5  0.00 0.00   73.20  404.46  4478.78  3774.52 17.28
> 0.240.27   0.46  21.75

it's odd that most of the I/O is on just one of these /home drives.

craig

-- 
craig sanders 

BOFH excuse #94:

Internet outage
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-18 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 12:59:09PM +1100, Russell Coker wrote:
> On Tue, 19 Jan 2016 12:22:31 PM Craig Sanders via luv-main wrote:
> > On Tue, Jan 19, 2016 at 10:12:12AM +1000, Piers Rowan wrote:
> > > - dovecot
> > > - apache (roundcube webmail)
> > > - sendmail
> > 
> > unless you're a sendmail expert with a decade or two of experience
> > working with it, you might want to think about switching to postfix.
> 
> While Sendmail is generally a bad choice, it seems unlikely that it is 
> contributing to the disk IO performance here.

probably not. but there's no good reason to use it these days unless you
know it extremely well. there are better options that are easier and
saner to configure (postfix and exim for starters). postfix's an almost
drop-in replacement for sendmail, so it's an easy conversion - IIRC some
of the map tables (like the virtual table) have a slightly different
format.

OTOH, last time i used sendmail (admittedly over 10 years ago) it
suffered from a massive thundering herd problem...too much inbound *or*
outbound mail or both at once and the system would slow to a crawl and
eventually crash (due mostly to running out of memory) because it tried
to send/receive/process it all at once. no amount of tuning would help.

postfix's queue management was vastly superior...just switching to
postfix on the exact same server (IIRC, a pentium pro with something
like 64MB or 256MB running a mailing list with a few hundred thousand
subscribers - so a lot of mail to send out, and a lot of incoming
bounces) caused it to stop crashing under the load and deliver all the
mail in about 1/10th of the time.


> One thing that can be done to improve performance on the MTA side is
> to use Dovecot's delivery agent instead of having the MTA deliver
> directly or use Procmail or similar.  If Dovecot delivers directly it
> will index the mail while delivering it which will save time later.

yep.


> > so you're regularly importing mail into a database of some sort?
> > 
> > that may be the source of your problem - dovecot will be contending
> > with mysql for disk I/O unless the mysql db and the mail spool are on
> > different disks (ideally, two separate SSDs. or two separate RAID-1
> > devices on SSD)
> 
> I find it difficult to imagin a mail service of that nature which needs 
> performance that is greater than a pair of SSDs in a RAID-1 can provide.

i meant one SSD or RAID-1 pair for mail, and another for mysql.

they're the two big disk I/O hogs, so moving them onto separate disks
is going to be a huge win for performance.  RAID-1 is a bonus, but
just having them on separate SSDs would help enormously.

and with only one disk for each, you can create them as degraded RAID-1
(i.e. RAID-1 without a mirror disk - Linux mdadm supports this, without
any problem) so it's easy to add a 2nd drive to each later if buying
four drives at once stretches the budget too far.
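
e.g. a minimal sketch with mdadm (device names and md number are examples
only - adjust to suit):

    # create a "half" RAID-1 with one real disk and one missing member
    mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdX1 missing
    # later, when the second SSD arrives, add it and let it resync
    mdadm /dev/md3 --add /dev/sdY1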


> I think we're talking about a mail server for a small company not hotmail.

that's my impression too.

craig

-- 
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-18 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 02:24:01PM +1100, Russell Coker wrote:
> Why do you use LVM inside a virtual machine?  

my guess is that it's the default for the RH installer to use lvm.

ditto for centos and fedora.

> That offers no real benefit and makes things more difficult to debug
> things as it will be a pain if the VM doesn't boot properly and you
> need to fix that LV from the Dom0.

the best way to use LVM for VMs is to create an LVM volume on the host
machine and tell the VM to use that as its disk.  

The VM doesn't need to know or care what the underlying disk is, and it
certainly shouldn't be running LVM on top of whatever the host gives it.

the only time that makes any sense is when you're using a VM to
experiment with LVM (or ZFS or btrfs or whatever)...i.e. testing and
research, not production use.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: SSHD (and steam and windows and other stuff)

2016-01-14 Thread Craig Sanders via luv-main
On Fri, Jan 15, 2016 at 12:54:42AM +1100, Tim Connors wrote:
> On Thu, 14 Jan 2016, Trent W. Buck via luv-main wrote:
> 
> > $ msy | foldr grep -Fi -- 2tb 3.5 sata3 7200
> 
> msy?  That looks nifty, given how horrible browsing the pictures on their
> webshite is.
> 
> details?

try my msygrep script.

http://taz.net.au/~cas/msytools/

> But even if it did fall back, it would become unusable.  200mb write
> stripes.  Gives you about 1 IOP when writing random seeks.

the 200MB stripes are on the Archive drives, not the SSHDs.  The SSHDs
are normal drives with an 8GB flash cache in addition to the usual 64MB
RAM cache.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: pxe server

2016-02-08 Thread Craig Sanders via luv-main
On Sat, Feb 06, 2016 at 12:36:02AM +, James Harper wrote:
> Any suggestions?

you seem to have solved your original question, but my suggestion is to
serve gpxelinux.0 or ipxe to the client, then you can use http rather
than tftp to transfer kernel+initrd or boot/rescue image or whatever.

http is faster and IMO more reliable.

and both will give you access to various extras like menus.

BTW, an alias (and Allow/Deny access control directives) in your web
server to serve, e.g., /var/tftp/ as /tftp/ is useful.
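
e.g. something like this for apache (2.2-style access control shown -
adjust for your web server and version, and the subnet is just an example):

    Alias /tftp/ /var/tftp/
    <Directory /var/tftp/>
        Order allow,deny
        Allow from 192.168.1.0/24
    </Directory>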

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Frozen Debian testing upgrade

2016-02-12 Thread Craig Sanders via luv-main
On Fri, Feb 12, 2016 at 08:47:11PM +, stripes theotoky wrote:

> Configuring libc6
> Kernel version not supported
>  need to be restarted
> This version of the GNU libc requires kernel version 3.2 or later.
> Older versions might work but are not officially supported. Please consider
> upgrading your kernel.
> 
> Ok
> 
> Click on OK as part of the upgrade is to bring in a newer kernel anyway and
> [... problems ... ]


I would suggest manually installing a 3.2 or later kernel (with 'apt-get
install linux-image-xx') and rebooting **before** upgrading libc6 or
anything else.

if the updated libc6 has a hard requirement on kernel >= 3.2 then
upgrading libc6 before rebooting to the new kernel is bound to lead to
problems...quite possibly very significant and difficult-to-fix problems
as pretty much everything (including apt and dpkg and bash/dash and
coreutils and more) depend on libc6.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: SSL configuration

2016-02-01 Thread Craig Sanders via luv-main
On Sun, Jan 31, 2016 at 04:11:44AM +1100, Andrew McGlashan wrote:
> All good and fair comments, but anyone whom lets people continue to use
> IE and/or Windows XP. well.  

yes, it's simply intolerable. they should be rounded up and sent to
re-education camps. apply the electrodes until they learn the error of
their ways, then send them home gibbering with an ubuntu install disk.

btw, that should be a "who" not a "whom".

> They WILL have to change sooner or later and the sooner the better.

many are still "happily" using XP (i.e. they don't know any better) -
as the recent virus fiasco at RMH shows. workstations across the entire
hospital, from pharmacy to the wards taken out by really ancient XP
viruses.  I know, i've been stuck in here for much of it. some sections
(fortunately, the transplant clinic was one) had upgraded to win7 but
many were still running XP.

in fact, the health workers don't CARE what they use, it's not their job
to care, or know - they just need access to their patients' records and
blood test results and x-rays etc while they've got the patient in the
room, or during reviews with other doctors.  They've got more important
things to do than worry about the trivial details of the computers
they're using.


this will probably be the wake-up call that hospital management needed,
but many people and organisations just ignore computer security because
upgrading hundreds or thousands of desktop machines is time-consuming
and expensive.

simply telling them "Use Linux" doesn't and won't work.  They can't buy
it off the shelf, it doesn't come with a recognisable brand-name, and
they just don't have the time or the inclination to find out what it's
about or why it might save them from future virus problems.

and from the bean-counting management POV, IT is an expense, to be
minimised...so underfunded, and understaffed.

craig

-- 
craig sanders 

BOFH excuse #318:

Your EMAIL is now being delivered by the USPS.
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-19 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 04:52:38PM +1000, Piers Rowan wrote:
> >1. what kind of disks are these?
>
> HP Hardware Array with default LVM on it. This was just to take live
> snapshots of MySQL to be able to restart replication without issue.

so, how are the HP Array controller and disks configured?

JBOD? one big RAID-5 array? a RAID-1 array for mysql and the rest as
RAID5 for the OS and /home etc?


if it's one big RAID-5 array, with LVM on top of that then there is no
separation of disk I/O between mysql and mail - they're all using the
same drives all the time. performance-wise, this is about the worst
possible way to set it up...good for maximising storage capacity, but
bad for disk I/O performance.


BTW, what kind of server is it?  brand/model. and specs (CPU, RAM, etc).
I guess it's some model of HP server.

and what's the exact model of the HP Array Controller? 'lspci | grep HP'
or 'lspci | grep -i array' should show it.

HP provide a command-line tool for linux called hpacucli - if you
haven't installed it already, you should do so ASAP.  It will let you
query and configure your HP Array from linux.  You'll have to search on
the HP web site for download and install instructions - or maybe it came
on a CD with your server.

Rather than wade through the inevitably over-verbose official corporate
documentation (read that too, but as a reference later), you can find
quick guides on how to use its basic functions with google, e.g.:

http://koo.fi/blog/2008/06/08/hp-array-configuration-and-diagnostic-utilities-on-linux/
http://www.thegeekstuff.com/2014/07/hpacucli-examples/
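
a couple of handy commands to start with (from memory - double-check the
exact syntax against the docs for your version):

    hpacucli ctrl all show config
    hpacucli ctrl all show status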

BTW, hpacucli is also useful for server monitoring purposes - tools like
nagios have plugins to use it for array status monitoring and alerting.


if you're wondering why Russell and I are asking all these questions
it's because you haven't provided anywhere near enough details to answer
your performance question.



> > 2. is /home RAID-5? i'm guessing it is since RAID-10 with 3 drives
> > and a hot-spare doesn't make any sense.  RAID-5 can be dreadfully
> > slow, especially on random writes.
>
> RAID 5 + hot spare


> > what kind of virtual disks are you using for the VMs? qcow2 image
> > files? raw or lvm partitions? partitions are much faster than qcow2
> > files.
>
> Not sure - 

you can check the VM's config to see if it uses a file
image (e.g. if using libvirt, maybe something like
/var/lib/libvirt/images/mailserver.img) or an LVM partition (e.g.
/dev/mapper/vg-mail or /dev/dm-?)

what kind of virtualisation are you using?  KVM?  Xen?  with libvirt?

if you're using libvirt, you can use 'virsh dumpxml' and sed to
query the vm's disk config:

virsh dumpxml $vmname | sed -n -e '/<disk/,/<\/disk>/p'

> can't break up the HP array to give it a dedicated disk now tho.  Too
> much risk of downtime.

do you have enough drive bays to add an SSD or pair of SSDs? if so, you
can begin to migrate from your current setup to an improved performance
setup like i described in my initial reply.

you can even do it without spare drive bays if you have a spare sata port
or two, just connect the drives to the ports and power and hang them
loosely in the case while you're transferring the files

the transfer can be done with rsync - a good method which minimises
downtime is to run rsync while the mail server is still running normally
to transfer the files to the new disk(s).

you can run this rsync operation repeatedly as often as you like to keep
it in sync (i would advise putting the exact rsync command with all its
options in a shell script so you can just run the script without having
to remember/retype the options every time).
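
e.g. a minimal version of such a script (paths are examples only - adjust
to wherever the mail spool lives and where the new SSD is mounted):

    #!/bin/sh
    # repeatable sync of the live mail spool to the new SSD
    rsync -aHAX --numeric-ids --delete /var/mail/ /mnt/newssd/mail/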

when you're ready to cut over to the new setup, stop the mail daemons
and run a final rsync. then unmount and remount the new drives into
their correct location and start up the mail daemons again.

the bulk of the transfer is done while the service is still up and
running, and if you plan each step/command in advance (i.e. give
yourself a step-by-step "script" to follow on the command line),
downtime will be not much more than the minimum time needed to rsync any
changes between the last rsync run and the service being stopped.

and you can do the same later, with your mysql disks.


> >>iostat -x 10
> >>Linux 2.6.32-573.7.1.el6.x86_64 19/01/16 _x86_64_(12 CPU)

just noticed this - that's quite an old kernel, what version of Centos are
you running?

> >>avg-cpu:  %user   %nice %system %iowait  %steal   %idle
> >>5.270.001.743.980.00   89.01
> >>
> >>Device: rrqm/s   wrqm/s r/s w/s   rsec/s   wsec/s avgrq-sz
> >>avgqu-sz   await  svctm  %util
> >>sda 144.40   398.34   92.54   95.71  5790.73  4493.15 54.63
> >>0.321.71   1.38  25.92
>
> > so you're running a backup at the moment? is that when the slowdown
> > occurs, or does it happen any old time?
>
> No it was used for a project a couple of years ago and never unplugged

there's a lot of read and write activity 

Re: ata errors in dmesg/syslog - any pointers from the more ATA/AHCI literate?

2016-01-20 Thread Craig Sanders via luv-main
On Thu, Jan 21, 2016 at 04:34:39AM +, Anthony wrote:
> Would I be right in thinking that this kind of smart failure could not
> be triggered by the controller, and rather it's a drive fault, because
> the tests are run wholly within the drive itself and all that goes
> between drive and computer is the request to start the test, and the
> test results?

yep, the smart tests are run on the drive itself.

hope you've got a backup.

> My thoughts are that besides the bitching of the Marvell virtual ATA
> device (which perhaps was passing stuff through to the 1TB device),
> all the errors could be attributed to the now failed drive?

possibly. but IIRC you said it was whinging about the DVD drive for ages
too.

craig

-- 
craig sanders 

BOFH excuse #447:

According to Microsoft, it's by design
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Open hardware (Was: Castrated netbook)

2016-01-22 Thread Craig Sanders via luv-main
On Sat, Jan 23, 2016 at 09:13:28AM +1100, Joel W. Shea wrote:
> While its at a higher performance/price point than the netbooks and

sorry,  but this is just one of the language things that bug me.

why do people say "price point" when they mean "price"?

Is it because prices are somehow vulgar, and you can diminish the
vulgarity by making it not a "price" but a "price point"?

"price point" has a very specific meaning in economics, and it's not
a synonym for "price".

craig

ps: followups should go to luv-talk if you're subscribed.

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-19 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 11:11:22PM +, James Harper wrote:
> (it seems that "reply-all" no longer includes luv-main (from ms
> outlook at least), so I have to include it manually... what's with
> that?)

who knows? outlook is weird.

for list replies, it's better to just reply to the list without CC-ing
everyone anyway. i don't care much either way (i have procmail and i'm
not afraid to use it :), but some people really dislike getting dupes.

> > Of course a RAID-1 of SSDs will massively outperform the RAID-5 you
> > have.
>
> If you use SSDs for any sort of intensive storage, do keep an eye on
> the SMART "media wearout" values, and replace them before the counter
> hits 0 (or 1).

the only related value i can find on 'smartctl -a' on my 256GB OCZ
Vertex is:

  233 Remaining_Lifetime_Perc 0x   067   067   000   Old_age   Offline   -   67

I assume that means I've used up about 1/3rd of its expected life.  Not
bad, considering i've been running it for 500 days total so far:

    9 Power_On_Hours  0x   100   100   000   Old_age   Offline   -   12005

12005 hours is 500 days.  or 1.3 years.

and over that time, i've read 17.4TB and written 11.9TB. on a 256GB
SSD...equivalent to rewriting the entire drive 46 times. or approx 23GB
of writes per day.

  198 Host_Reads_GiB  0x   100   100   000   Old_age   Offline   -   17440
  199 Host_Writes_GiB 0x   100   100   000   Old_age   Offline   -   11901

I can expect probably another 2.5 years from this SSD or so at my
current/historical usage rates. by that time, i'll be more than ready to
replace it with a bigger, faster, and cheaper M.2 SSD.

and that's for an OCZ Vertex, one of the last decent drives OCZ made
before they started producing crap and went bust (and subsequently got
bought by Toshiba, who are now producing decent drives again under the
OCZ brand name)...so relatively old technology compared to modern
SSDs.

I'd expect a modern Intel or Samsung (or OCZ) to have an even longer
lifespan.

according to 
http://www.anandtech.com/show/8239/update-on-samsung-850-pro-endurance-vnand-die-size

the 256GB Samsung 850 Pro has an expected lifespan of 70 years with
20GB/day writes or 14 years with 100GB/day writes.

The 512GB model doubles that and the 1TB quadruple it.

even if you distrust the published specs and regard them as marketing
dept. lies, and discount them by 50% or even 75%, you're still looking
at long lives for modern SSDs...more than long enough to last until the
next upgrade cycle for your servers.




So, yes, keep an eye on the "Remaining_Lifetime_Percentage" or "Wear
Level Count" or whatever the SMART attribute is called on your
particular SSD, but there's no need to worry too much about it unless
you're writing 1TB/day or so (and even then it should last around 3.5
years).



> I'm seeing time-to-replacement of about 12 months on high load
> system where the SSD's are used for a RAID cache (ZFS, Intel RAID
> controllers, etc).

12 months?  how much are you writing to those things each day?


BTW, my OCZs are partitioned and used for OS and /home and ZFS L2ARC and
ZFS ZIL.

i would consider usage to be fairly light, not heavy. the heaviest usage
it suffers would be compiling stuff and the regular upgrades of debian
sid.

> Not particularly relevant to the discussion at hand, but with
> suggestions of "put in SSD's and all your trouble will go away", it is
> something you need to consider.

The endurance issues that SSDs suffered in the past are basically gone
now.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: Mail Server Really Slow

2016-01-20 Thread Craig Sanders via luv-main
On Wed, Jan 20, 2016 at 07:28:38AM +, James Harper wrote:
> As long as I remember to replace the To: with luv-main each time I
> reply, I guess it's workable.

that happens even on just plain Replies, too - not just Reply-All?

that's weird because the list munges the From: address, so a reply
should go to the list.


> >   233 Remaining_Lifetime_Perc 0x   067   067   000Old_age   Offline 
> >  -
> > 67
>
> 233 is reported as Media Wearout Indicator on the drives I just
> checked on a BSD box, so I guess it's the same thing but with a
> different description for whatever reason.

i dunno if that name comes from the drive itself or from the smartctl
software. that could be the difference.

> > I assume that means I´ve used up about 1/3rd of its expected life.  Not
> > bad, considering i've been running it for 500 days total so far:
> > 
> > 9 Power_On_Hours  0x   100   100   000Old_age   Offline 
> >  -
> > 12005
> > 
> > 12005 hours is 500 days.  or 1.3 years.
> 
> I just checked the server that burned out the disks pretty quick last
> time (RAID1 zfs cache, so both went around the same time), and it

i suppose read performance is doubled, but there's not really any point
in RAIDing L2ARC. it's transient data that gets wiped on boot anyway.
better to have two l2arc cache partitions and two ZIL partitions.

and not raiding the l2arc should spread the write load over the 2 SSDs
and probably increase longevity.


my pair of OCZ drives have mdadm RAID-1 (xfs) for the OS + /home and
another 1GB RAID1 (ext4) for /boot, and just partitions for L2ARC and
ZIL. zfs mirrors the ZIL (essential for safety, don't want to lose the
ZIL if one drive dies!) if you give it two or more block devices anyway,
and it uses two or more block devices as independent L2ARCs (so double
the capacity).


$ zpool status export -v
  pool: export
 state: ONLINE
  scan: scrub repaired 0 in 4h50m with 0 errors on Sat Jan 16 06:03:30 2016
config:

NAMESTATE READ WRITE CKSUM
export  ONLINE   0 0 0
  raidz1-0  ONLINE   0 0 0
sda ONLINE   0 0 0
sde ONLINE   0 0 0
sdf ONLINE   0 0 0
sdg ONLINE   0 0 0
logs
  sdh7  ONLINE   0 0 0
  sdj7  ONLINE   0 0 0
cache
  sdh6  ONLINE   0 0 0
  sdj6  ONLINE   0 0 0

errors: No known data errors

this pool is 4 x 1TB. i'll probably replace them later this year with
one or two mirrored pairs of 4TB drives.  I've gone off RAID-5 and
RAID-Z.  even with ZIL and L2ARC, performance isn't great, nowhere near
what RAID-10 (or two mirrored pairs in zfs-speak) is.  like my backup pool.

$ zpool status backup -v
  pool: backup
 state: ONLINE
  scan: scrub repaired 0 in 4h2m with 0 errors on Sat Jan 16 05:15:20 2016
config:

NAMESTATE READ WRITE CKSUM
backup  ONLINE   0 0 0
  mirror-0  ONLINE   0 0 0
sdb ONLINE   0 0 0
sdi ONLINE   0 0 0
  mirror-1  ONLINE   0 0 0
sdd ONLINE   0 0 0
sdc ONLINE   0 0 0

errors: No known data errors

this pool has the 4 x 4TB Seagate SSHDs i mentioned recently.  it stores
backups for all machines on my home network.


> > and that's for an OCZ Vertex, one of the last decent drives OCZ made
> > before they started producing crap and went bust (and subsequently
> > got

sorry, my mistake.  i meant OCZ Vector.

sdh OCZ-VECTOR_OCZ-0974C023I4P2G1B8
sdj OCZ-VECTOR_OCZ-8RL5XW08536INH7R


> I've seen too many OCZ's fail within months of purchase recently, but
> not enough data points to draw conclusions from. Maybe a bad batch or
> something? They were all purchased within a month or so of each other,
> late last year. The failure mode was that the system just can't see
> the disk, except very occasionally, and then not for long enough to
> actually boot from.

i've read that the Toshiba-produced OCZs are pretty good now, so
possibly a bad batch. or sounds like you abuse the poor things with too
many writes.

even so, my next SSD will probably be a Samsung.

> Yep. I just got a 500GB 850 EVO for my laptop and it doesn't have
> any of the wearout indicators that I can see, but I doubt I'll get
> anywhere near close to wearing it out before it becomes obsolete.

that's not good. i wish disk vendors would stop crippling their SMART
implementations and treat it seriously.


craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
http://lists.luv.asn.au/listinfo/luv-main


Re: ata errors in dmesg/syslog - any pointers from the more ATA/AHCI literate?

2016-01-19 Thread Craig Sanders via luv-main
On Tue, Jan 19, 2016 at 11:35:06PM +1100, Anthony Hogan wrote:
> I have my system (Gigabyte P55A-UD4 r1 F15 firmware) configured in AHCI
> mode with a 1+3 TB HDDs, and a DVD drive.
> 
> ata5 = 1TB SATA (msdos partition scheme)
> ata9 = DVD SATA
> ata10 = 3TB SATA (GPT scheme to use 3TB as one, but not a boot drive)
> ata16 = Appears to be virtual "Marvell" device
> 
> (I am not using hardware or fake RAID, and have disabled eSATA port)
> 
> I have two 6Gbps SATA ports, and the rest are 3 I think. One of the 6GB
> ports goes to the 3TB drive, the other goes to a docking bay in the
> case that I use to do occasional offline backups.
> 
> The kernel seems to be bitching about the DVD drive all the time (and
> sometimes my 1TB drive), and I'm not quite sure why. Buggy AHCI? Crappy
> SATA cables? Something else? Used AHCI mode for ages, but only with
> upgrade to Ubuntu version that I've noticed more bitching.
> 
> The other day, my 1TB drive got pushed into read only mode on boot - I
> did several SMART tests and an fsck from a live environment and nothing
> out of the ordinary came up (I have recently installed Windows 10 in
> dual boot, but prior to that had Windows 7 - Windows didn't touch grub
> at all).

some things to try:

0. stating the obvious, but maybe try a different kernel. you don't
mention what version of ubuntu you're running (the latest?) but you
could see if there's an updated kernel for it.

if there isn't one, you could use the liquorix kernel (latest
liquorix version which runs on both debian and ubuntu is
linux-image-4.3-3.dmz.6-liquorix-amd64. i've installed that
but haven't got around to rebooting yet, so am still running
linux-image-4.3-3.dmz.2-liquorix-amd64).

http://liquorix.net/


1. disable AHCI in the BIOS.  

2. disable AHCI in the BIOS and move both SATA drives onto the Marvell
6Gbps ports and use the sata_mv driver (which is in the mainline kernel,
has been for years).  The driver has a few parms that might be worth
reading about and experimenting with.

$ modinfo sata_mv | grep -v alias
filename:   
/lib/modules/4.3-3.dmz.2-liquorix-amd64/kernel/drivers/ata/sata_mv.ko
version:1.28
license:GPL
description:SCSI low-level driver for Marvell SATA controllers
author: Brett Russ
srcversion: DD7FD903CFF2406E08B557D
depends:libata
intree: Y
vermagic:   4.3-3.dmz.2-liquorix-amd64 SMP preempt mod_unload modversions 
parm:   msi:Enable use of PCI MSI (0=off, 1=on) (int)
parm:   irq_coalescing_io_count:IRQ coalescing I/O count threshold 
(0..255) (int)
parm:   irq_coalescing_usecs:IRQ coalescing time threshold in usecs 
(int)

According to http://www.gigabyte.com.au/products/product-page.aspx?pid=3436#sp 
the marvell ports should be labelled GSATA3_6 and GSATA3_7


NOTE: I'm not a fan of marvell sata (i had problems with them in the
distant past but the bugs have probably been fixed long ago, and they're
not exactly high-performance controllers, they're decidedly low-budget
stuff), but it's worth a try.

the DVD can stay where it is on one of the 3Gbps SATA ports.


3. Check all other settings in the BIOS and make sure they're reasonably
sane. Since BIOS features and options tend to be extremely badly
documented (if at all) this isn't as easy as it sounds. unless you have
another computer with internet access nearby, you won't be able to google
any of the more obscure settings from the BIOS. and the "helpful
descriptions" of the options in the bios screen are generally neither
helpful nor descriptive.


4. Buy a 2 or 4 port PCI-e SATA 3 6Gbps card using a known good
chipset. The trouble is that it's difficult to know what you're getting
- many of the cheaper cards use marvell chips anyway (and probably older
versions than what's on your m/b).

if you want to be certain you're getting something good and don't mind a
bit of overkill for the task at hand, look for one of the LSI 2008 SAS
controllers.  there are many brands with re-badged versions, including
the IBM M1015 which typically sell for around $100 on ebay for an 8-port
card - I have three of these and they're great.  OTOH, $100-ish is not too
far off what the cheapest new Haswell CPU + m/b would cost.


5. which brings us to the final option: replace the m/b and cpu. The
cheapest possible replacement would be a G1840 CPU for around $57 and an
Asrock H81M-DGS motherboard (with 2xSata 6Gbps and 2xSata 3Gbps ports)
for $69. the LGA1150 and LGA1156 both use DDR3 so no need to get new
ram. total would be around $126. for $20 more you could get the Asrock
B85M Pro3 which has 4 x Sata 6Gbps and 2 x Sata 3Gbps.

dunno what CPU you've got in your current board or how it compares to
the G1840...but swapping the m/b should not only fix your current
problem by getting rid of the problematic hardware, it would also give
you a viable upgrade path for more RAM and better CPUs (all the way up
to Xeon CPUs like the E3-1241V3)...whereas LGA1156 is dead and buried by
now.


Re: zfs vs. recent kernels

2016-08-10 Thread Craig Sanders via luv-main
On Mon, Aug 08, 2016 at 09:23:56PM +1000, russ...@coker.com.au wrote:
> On Sunday, 7 August 2016 1:58:25 AM AEST Robin Humble via luv-main wrote:
> > has anyone else had issues with ZFS on recent kernels and distros?
>
> Debian/Jessie (the latest version of Debian) is working really well
> for me.  Several systems in a variety of configurations without any
> problems at all.

me too, no problems on sid with 4.6.x kernels and zfs-dkms 0.6.5.7-1
on several different machines.

i recently upgraded my main system from 16GB to 32GB, but that was
because I started using chromium again and it really uses a LOT of
memory. leaks a lot too.  I took the opportunity to tune zfs_arc_min &
zfs_arc_max to 4GB & 8GB (they had been set to 1 & 4GB), and have zswap
configured to use up to 25% of RAM for compressed swap.
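
for reference, that tuning is just a module option plus a couple of boot
parameters (the values below are the 4GB/8GB figures above, in bytes):

    # /etc/modprobe.d/zfs.conf
    options zfs zfs_arc_min=4294967296 zfs_arc_max=8589934592

    # kernel command line (e.g. via grub) for zswap
    zswap.enabled=1 zswap.max_pool_percent=25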

> > BTW this is on fedora 24 with root on ZFS, but it sounds like
> > ubuntu has similar issues. symptoms feel like a livelock in some
> > slab handling rather than an outright OOM. there's 100% system
> > time on all cores, zero fs activity, no way out except to reset.
> > unfortunately root on ZFS on a laptop means no way that I can think
> > of to get stack traces or logs :-/

syslog over the LAN?  serial console?

> For the laptops I run I use BTRFS.  It gives all the benefits of ZFS
> for a configuration that doesn't have anything better than RAID-1 and
> doesn't support SSD cache (IE laptop hardware) without the pain.

I'm probably going to do this when i replace my boot SSDs sometime in
the nearish future (currently mdadm raid-1 partitions for / and /boot,
with other partitions for swap, mirrored ZIL, and L2ARC).

I'd like to use zfs for root (i'm happy enough to net- or usb- boot
a rescue image with ZFS tools built-in if/when i ever need to do any
maintenance without the rpool mounted) except for the fact that ZFS is
only just about to get good TRIM support for VDEVs.  If it's ready and
well-tested by the time i replace my SSDs, I may even go ahead with
that. being able to use 'zfs send' instead of rsync to backup the root
filesystems on all machines on my LAN will be worth it.


speaking of which, have you ever heard of any tools that can interpret
a btrfs send stream and extract files from it? and maybe even merge in
future incremental streams? in other words, btrfs send to any filesystem
(including zfs). something like that would make btrfs for rootfs and zfs
for bulk storage / backup really viable.

I need a good excuse to start learning Go, so i think i'll start playing
with that idea on my ztest vm (initially created for zfs testing but now
has the 5GB boot virtual disk + 12 x 200MB more disks for mdadm, lvm,
btrfs, and zfs testing). BTW, there's a bug in seabios which causes a VM
to lock up on "warm" reboot if there's more than 8 virtual disks if you
have the BIOS boot menu enabledwhich is an improvement over what it
used to do, which was lock up even on initial "cold" boot.

it may not even be possible - the idea is based on fuzzy memories from
years ago that a btrfs send stream contains a sequence of commands
(and data) which are interpreted and executed by btrfs receive. IIRC,
the btrfs devs' original plan was to make it tar compatible, but tar
couldn't do what they needed so they wrote their own.


> ZFS is necessary if you need RAID-Z/RAID-5 type functionality (I
> wouldn't trust BTRFS RAID-5 at this stage), if you are running a
> server (BTRFS performance sucks and reliability isn't adequate for a
> remote DC), or if you need L2ARC/ZIL type functionality.

i made the mistake of using raidz when i first started using zfs years
ago. it's not buggy (it's rock solid reliable), it's just that mirrors
(raid1 or raid10) are much faster, and easier to expand. it made sense,
financially, at the time to use 4x1TB drives in raid-z1, but I'm only
using around 1.8TB of that, so I'm planning to replace them with either
2x2TB or 2x4TB. maybe even 4x2TB for better performance.

the performance difference is significant...my "backup" pool has two
mirrored pairs, while my main "export" pool has raid-z. scrubs run at
200-250MB/s on "backup", and around 90-130MB/s on "export".


i also use raidz on my mythtv box. performance isn't terribly important
on that, but storage capacity is. even so, mirrored pairs would be
easier to upgrade than raidz - cheaper too, because I only have to
upgrade a pair at a time rather than all four raid-z members.  I have no
intention of replacing any until the drives start dying or 8+TB drives
are cheap enough to consider buying a pair of them.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Cheap Home Cloud ~ Raspberry Pi + USB Drives

2016-08-03 Thread Craig Sanders via luv-main
On Wed, Aug 03, 2016 at 01:07:24AM +1000, Russell Coker wrote:
> It's nice that you can get a case that can handle 8 disks for $200.

cases that handle 8+ disks are common enough, but 8 hot-swap bays (and 4
more internal drive bays) in a small case is amazing.

very nice.  i wish they'd been around years ago.  they're ideal
for DIY NAS, with plenty of room to grow.

> But as 6TB disks are affordable and 8TB disks are available hardly
> anyone needs more than 4 disks for a home server.

I don't know if i trust the 6 or 8TB drives yet.  maybe in a year or so.

oh, and don't use SMR drives like the Seagate "Archive" drives for
anything but read-mostly uses - occasional writes are OK, but frequent
writes (even typical desktop usage patterns) will be horribly slow. and
don't use them in raid or raid-like (btrfs,zfs) setups.

and size isn't everything. multiple smaller disks in RAID-1/10 will give
much better performance.  4x2TB RAID-10 will cost 10-20% more than 2x4TB
RAID-1 but will be much faster.

> If you want a server for serious performance then a couple of large
> SATA disks for main storage and a couple of NVMe devices in PCIe cards
> for ZIL and L2ARC would be the way to go.

btw, i don't have the link handy but i recently saw an announcement for
a PCI-e card that i've been hoping for ages that someone would make - a
PCI-e x16 card with four x4 NVMe slots on it.

a DIY NAS with built-in GPU and a single PCI-E slot could make good use
of that.

> In the long term I think that 2.5" disks won't be a useful option
> for many people.  Anyone who is designing a laptop now should design
> for M.2 except for the corner-case of >15" laptops.  Anyone who is
> designing a motherboard now should include 2*M.2 sockets on it.

i've got nothing against NVMe (in fact, it's one of the technologies I'm
keenly waiting to get better and cheaper), but they have different use
cases than 2.5" or 3.5" drives.

2.5" drives will be useful for years to come.  NVMe (esp. in raid-1)
is great for an OS drive and for bcache/l2arc/zil/etc but capacity is
limited due to the small size and, worse, pci-e lanes are limited (esp.
in Intel motherboards) so very few motherboards will have more than 1 or
2 M.2 slots.

i haven't yet seen any NVMe hot-swap capability either, which means
taking the system down to replace a drive.

SATA3 SSDs get up to around 550MB/s.  4 of them in RAID-1 or RAID-10
will get nearly 2GB/second read speed, and over 1GB/s write. and the
more mirrored pairs you add, the faster it gets.  That's fast enough
that it doesn't need an NVMe for caching.


> For all storage nowadays you either want the speed of SATAe (which
> is most easily realised with M.2) or the capacity and price of 3.5"
> disks.

part of my point was that SSDs are getting cheaper all the time.  In
the not too distant future, they will be cheap enough for home users
to build reasonably large arrays of 1-4TB SSDs, with much greater
performance, especially on random seeks or programs that open files
with O_SYNC or just fsync a lot.

It's easy and cheap to get hot swap bays that let you put four 2.5"
drives in a single 5.25" bay (or 12 in 3x5.25" bays if you don't like
the noise of little 40mm fans). also adapters to fit 2x2.5" in a 3.5"
bay.


> Finally instead of buying systems for such use getting them for free  
> is a better option for most people.   

yep, that's why i suggested scrounging.

craig

-- 
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: zfs vs. recent kernels

2016-08-11 Thread Craig Sanders via luv-main
On Thu, Aug 11, 2016 at 02:53:10AM -0400, Robin Humble wrote:
> >have you tried setting zfs_arc_min = zfs_arc_max? that should stop
> >ARC from releasing memory for linux buffers to use.
>
> is that what most folks do?

no idea. it just seems like something that's worth trying.

> as the SSD is fast at large reads (500MB/s), I could also just cache
> metadata and not data. would that make sense do you think?

it might help.  I used to do it and it didn't seem to do much, but
that was on a low memory system with only 1GB of ARC.  IIRC, I have
primarycache=metadata and secondarycache=all

you can set that per filesystem or zvol for both ARC and L2ARC with 'zfs
set'.
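
e.g. (substitute your own pool/filesystem name):

    zfs set primarycache=metadata tank/home
    zfs get primarycache,secondarycache tank/home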

primarycache=all | none | metadata

Controls what is cached in the primary cache (ARC). If this
property is set to all, then both user data and metadata is
cached.  If this property is set to none, then neither user data
nor metadata is cached. If this property is set to metadata,
then only metadata is cached. The default value is all.

secondarycache=all | none | metadata

Controls what is cached in the secondary cache (L2ARC).  If this
property is set to all, then both user data and metadata is
cached.  If this property is set to none, then neither user data
nor metadata is cached.  If this property is set to metadata,
then only metadata is cached. The default value is all.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: zfs vs. recent kernels

2016-08-11 Thread Craig Sanders via luv-main
On Fri, Aug 12, 2016 at 12:38:41PM +1000, Craig Sanders wrote:
> > as the SSD is fast at large reads (500MB/s), I could also just cache
> > metadata and not data. would that make sense do you think?
> 
> it might help.  I used to do it and it didn't seem to do much, but
> that was on a low memory system with only 1GB of ARC.  IIRC, I have
> primarycache=metadata and secondarycache=all

i should have mentioned that this was on a system with lots of
background processes and cron jobs, many of which recurse directories,
so lots of metadata churn. e.g. running find on a large directory (like
a debian mirror) will do it.

btw, i guess you've already read http://open-zfs.org/wiki/Performance_tuning

also, this is a bit dated but still worth reading: 

https://pthree.org/2012/12/07/zfs-administration-part-iv-the-adjustable-replacement-cache/

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: rsync with multiple threads

2017-02-20 Thread Craig Sanders via luv-main
On Fri, Feb 17, 2017 at 06:25:38PM +1100, Joel W. Shea wrote:

> Are you maxing out your disk/network bandwidth already?

This is key, IMO, to whether running multiple rsyncs in parallel is
worth it or not. Almost all of the time, rsync is going to be I/O
bound (disk and network) rather than CPU bound - so adding more rsync
processes is just going to slow them all down even more.  A single rsync
process can saturate the disk and network I/O bandwidth of most common disk
subsystems and network connections.

about the only time more rsync processes might help is if you're
transferring between two servers with SSD storage arrays via a
direct-connect 10+Gbps link...and even then, only if the disk + network
throughput is at least a few multiples of what a single rsync job (incl.
child processes for ssh and/or compression if any) can cope with.

or if the source AND destination of each of the multiple rsyncs are
on completely separate disks/storage-arrays so they don't compete
with each other for disk i/o. e.g. rsync from server1/disk1 to
server2/disk1 can run at the same time as an rsync from server1/disk2
to server2/disk2...especially if you can use separate network interfaces
for each rsync.


splitting up the transfer into multiple smaller rsync jobs to be
run consecutively, not simultaneously, can be useful...especially
if you intend to run the transfers multiple times to get
new/changed/deleted/etc files since the last run.  There's a lot
of startup overhead (and RAM & CPU usage) with rsync on every run,
comparing file lists and file timestamps and/or checksums to figure
out what needs to be transferred.  Multiple smaller transfers (e.g. of
entire subdirectory trees) tend to be noticeably faster than one
large transfer.
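
a rough sketch of the consecutive approach (directory names are made up):

    for d in home mail srv ; do
        rsync -aHAX --delete "/data/$d/" "backuphost:/data/$d/"
    done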

in other words, multiple parallel rsyncs is usually a false
optimisation.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: pdns security update (was Re: NBN satelite setup)

2016-09-16 Thread Craig Sanders via luv-main
On Fri, Sep 16, 2016 at 01:12:07AM -0700, Rick Moen wrote:

> _But_ that is completely unrelated to pdnsd.

ah, my mistake.  i assumed he was talking about powerdns.

> http://linuxmafia.com/faq/Network_Other/dns-servers.html

good page that, i've read it before but not for some time. IMO a useful
addition to it would be a list of authoritative servers that use bind9
RFC-1034 zonefiles.

apart from "it aint broke, why fix it?" laziness, one of the reasons i'm
still using bind9 is because I don't want to rewrite my zone files in
a new format (or even have to learn a new format), and I haven't been
overly happy with the few alternatives I've tried that could use bind
zonefiles.

 - powerdns is serious overkill for my needs (home server with only a
   few domains).

 - last time i looked at it (years ago, not long after it was released),
   there were some incompatibilities between NSD's interpretation of
   bind zonefiles and bind9's interpretation.  Also, I didn't want to
   have to run two name servers (internet-facing authoritative and
   private LAN recursive) - although dnsproxy or similar could solve
   that problem now. it's probably worth another look.

 - maradns provides a conversion tool for bind zonefiles, but doesn't use
   them natively.  otherwise, i'd probably switch to it.   I've used it
   several times on gateway boxes i've built for other people.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


pdns security update (was Re: NBN satelite setup)

2016-09-16 Thread Craig Sanders via luv-main
On Wed, Sep 14, 2016 at 07:10:43AM +1000, zlin...@virginbroadband.com.au wrote:
> I am using pdnsd 

FYI, I saw this DSA come in a few days ago:

https://www.debian.org/security/2016/dsa-3664

Debian Security Advisory
DSA-3664-1 pdns -- security update

Date Reported: 10 Sep 2016
Affected Packages: pdns 
Vulnerable: Yes

Security database references:
In the Debian bugtracking system: Bug 830808.
In Mitre's CVE dictionary: CVE-2016-5426, CVE-2016-5427, CVE-2016-6172.

More information:

Multiple vulnerabilities have been discovered in pdns, an
authoritative DNS server. The Common Vulnerabilities and Exposures
project identifies the following problems:

CVE-2016-5426 / CVE-2016-5427

Florian Heinz and Martin Kluge reported that the PowerDNS
Authoritative Server accepts queries with a qname's length
larger than 255 bytes and does not properly handle dot inside
labels. A remote, unauthenticated attacker can take advantage of
these flaws to cause abnormal load on the PowerDNS backend by
sending specially crafted DNS queries, potentially leading to a
denial of service.  

CVE-2016-6172

It was reported that a malicious primary DNS server can crash a
secondary PowerDNS server due to improper restriction of zone
size limits. This update adds a feature to limit AXFR sizes in
response to this flaw.

For the stable distribution (jessie), these problems have been fixed
in version 3.4.1-4+deb8u6.

We recommend that you upgrade your pdns packages.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Can anyone make suggestions for training / learning resources wrt databases.

2016-09-17 Thread Craig Sanders via luv-main
On Sun, Sep 18, 2016 at 10:54:06AM +1000, h wrote:
> There are two areas I have been unable to find information on:
> 
> *  Writing sql / script files

An sql script file is just a bunch of sql commands in a sequence.

There are numerous ways to run such a script, including piping or
redirecting it into /usr/bin/psql, using `\i scriptname.sql` from within
an existing psql session, writing a simple perl DBI wrapper (or similar
in python or whatever) to connect to the DB and start issuing SQL
commands, etc.

The latter is most useful if you want conditional execution of commands
depending on the results of previous commands, but don't want to write
in a db-specific language like pl/sql (note: postgresql, at least, has
options to embed the perl, python, lua, sh, and/or tcl languages into
the postgres server itself so you can write stored procedures in those
languages too).

It's usually best to run sql scripts wrapped in a transaction, so that
either all of the commands in the script succeed or they all get rolled
back as if they'd never happened.
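
e.g. with postgresql that's just (db/script names are placeholders):

    psql --single-transaction -f myscript.sql mydb

or put an explicit BEGIN; at the top of the script and COMMIT; at the end.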

What else do you need to know?

> *  Structuring tables either within a database, or on a server

That's a much more complex topic.  There's no set answer, it depends
entirely on your data and how you intend to use it.

Wikipedia has a set of articles on this topic, which serve as a great
introduction to the concepts.  I'd start with:

https://en.wikipedia.org/wiki/Database_normalization

Read that, and then follow your interests with the links at the bottom.
It won't tell you everything you need to know, but a few hours reading
will at least teach you enough to know what to search for.

Stack Exchange (SE) also has a site dedicated to db-related questions
and answers, at http://dba.stackexchange.com/

http://dba.stackexchange.com/help/on-topic says:

dba.se is for those needing expert answers to advanced database-related
questions concerning traditional SQL RDBMS and NoSQL alternatives.

If you have a question about...

* Database Administration including configuration and backup / restore
* Advanced Querying including window-functions, dynamic-sql, and 
  query-performance
* Data Modelling and database-design, including referential-integrity
* Advanced Programming in built-in server-side languages including
  stored-procedures and triggers.
* Data Warehousing and Business Intelligence including etl, reporting,
  and olap

...then you're in the right place to ask your question!


Even if you don't post a question yourself, there are lots of good
questions and answers in an easy to find format (unlike a forum site,
you won't have to wade through page after page of inconsequential chat,
bickering, ill-informed nonsense, obsolete information etc just to find
the few hidden gems of useful information)



> The particular problems I have are:
> 
> * I regularly update my tables from multiple csv files, all residing
>   in the same folder. Currently I have a script with a hardwired path
>   for each csv file. I would like to have a single 'variable' I could
>   change to define the path to all the csvs.

No matter which language the script is written in, the best solution
for this is to use getopts to process command line options.  Even in sh
or bash, it's a lot easier than you might think to get good option and
argument processing, just like "real programs" :)

e.g. your script could have a '-p pathname' or '--path pathname'
command-line option.

Alternatively (or in addition), it could get the path from an
environment variable - e.g. if your script is called myscript, then
set and export 'MYSCRIPT_CSV_PATH' in your environment any time before
running it.  You could then have:

 - a hard-coded default
 - which can be overridden by the environment variable
 - which can be overridden by -p or --path on the command-line.
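
a minimal sketch of that pattern in sh, using the built-in getopts (the
variable name and default path are just examples):

    #!/bin/sh
    # hard-coded default, overridden by env var, overridden by -p
    csvpath="${MYSCRIPT_CSV_PATH:-/srv/csv}"
    while getopts 'p:' opt ; do
        case "$opt" in
            p) csvpath="$OPTARG" ;;
            *) echo "usage: $0 [-p path]" >&2 ; exit 1 ;;
        esac
    done
    echo "using csv path: $csvpath"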


For Bourne-like shells (ash, dash, bash, ksh, etc. even zsh), you have
the choice of either:

 - built in getopts (can only do short single-character options)
 - getopt from util-linux (can do both short and --long options)

I wrote an example back in June, showing/comparing how to use both at:

http://unix.stackexchange.com/a/287344/7696

NOTE: if you use getopt, use ONLY the version from util-linux.  Most
(all?) other versions have serious flaws and are dangerous to use.

For perl, use Getopt::Std for short options, or Getopt::Long for both
short and long.  There's also Getopt::ArgParse, which implements
something a lot like python's argparse in perl.

argparse is probably overkill for your needs but it's worth knowing
about because it's an easy way to implement sub-commands (e.g. like git,
which has numerous subcommands, like 'git add', 'git commit', 'git log',
and many more, each with their own set of options and args)

For python, there's getopt which provides short and long options. if you
need something fancier, use argparse or maybe gflags.

There may be the odd 

system monitoring (was Re: ZFS error logging)

2016-09-23 Thread Craig Sanders via luv-main
On Fri, Sep 23, 2016 at 04:06:42PM +1000, russ...@coker.com.au wrote:
> The Nagios model is to have a single very complex monitoring system while
> the mon model tends towards multiple simple installations.  Nagios has a
> nrpe daemon on each monitored server while with Mon you have Mon on each
> server and a master Mon monitoring them all.

and for logging and graphing all sorts of info about systems (disk space,
memory utilisation, cpu load, network traffic etc) and the services they're
running (e.g. postgres/mysql query load, VMs/containers running), munin
isn't bad.

some prefer cricket or cacti or still use the ancient mrtg, but I find
munin's easier to set up and write plugins for (e.g. a simple plugin I
wrote was a small sh + awk script to query slurm to graph the list of
running, cancelled, failed, queued, etc jobs for a HPC cluster)
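
the plugin interface is trivial - a plugin is just an executable that prints
its graph config when called with a "config" argument, and prints values
otherwise. a generic skeleton (not the slurm one, just an illustration):

    #!/bin/sh
    if [ "$1" = "config" ] ; then
        echo 'graph_title Example job count'
        echo 'jobs.label jobs'
        exit 0
    fi
    echo "jobs.value $(ps ax | wc -l)"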

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: pdns security update (was Re: NBN satelite setup)

2016-09-17 Thread Craig Sanders via luv-main
On Fri, Sep 16, 2016 at 03:27:38PM -0700, Rick Moen wrote:
> > good page that, i've read it before but not for some time. IMO a
> > useful addition to it would be a list of authoritative servers that
> > use bind9 RFC-1034 zonefiles.
>
> You know, they kind of _could_ have called that format the RFC-1034
> file

typo. i actually meant to type 1035 there, and thought i did.

> Anyway, yes, good idea -- and I actually do document RFC 1035 support
> where I know about it.

yep, saw that which is what gave me the idea for a summary list.

> Here's a creative solution from one of the NLnet Labs guys:
> https://www.nlnetlabs.nl/pipermail/nsd-users/2014-August/001998.html

I saw that last night.  It made me realise that probably the best option
for me would be to have NSD listen on 203.16.167.1 while Unbound listens
on 192.168.10.1 (I run both private and public subnets on my LAN so
I can have both private and public hosts and VMs).  Then all I'd have
to do is configure my LAN hosts and VMs to use 192.168.10.1 as the
resolver. Easy.

Unbound seems to have all the features I need, including being able to
forward requests for specific domains to specific servers (useful, e.g.,
for resolving private DNS views over a VPN).
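
in unbound.conf that looks something like this (the forwarded domain and the
VPN address are examples only):

    server:
        interface: 192.168.10.1

    forward-zone:
        name: "office.example.com"
        forward-addr: 10.8.0.1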

> Other solutions might beckon if the host is multihomed, e.g., bind NSD
> to the public-facing real IP, and bind Unbound to the private RFC1918
> address.

err, yes. exactly that.


> I'm tempted to react 'Fine, let us know when you're done playing
> standards gods, and I'll start paying attention.'

I mostly just leave things alone and then every 2 or 5 years or so go on
a binge of updating everything to the latest standards.

Unless I'm bored, or have a particular reason to make changes.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Hardware for kids

2016-08-24 Thread Craig Sanders via luv-main
On Tue, Aug 23, 2016 at 11:09:26AM +1000, Paul van den Bergen wrote:
> the answer to the question "is it possible to install Linux" is always
> yes...

i wouldn't be so sure about that.

The "SecureBoot" spec for tablets, laptops etc with ARM CPUs doesn't
allow installation of custom keys, and doesn't allow the owner to
disable it.

Some suspect that's a trial run for what MS-NSA-RIAA intends to require
of x86 PC & Motherboard manufacturers in the not-too-distant future.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-09-30 Thread Craig Sanders via luv-main
On Thu, Sep 29, 2016 at 06:14:49PM +1000, russ...@coker.com.au wrote:
> On Thursday, 29 September 2016 3:20:37 PM AEST Paul van den Bergen via
> luv-main wrote:
>
> > I'm going to be critical here - it is rare that you have personal choice
> > over the tools your system uses.

i haven't found it to be that rare. but then, i've always preferred jobs where
i'm going to use (or can choose to use) the tools I like to use...or, even
better, use my preferred tools and learn some new good ones.

> > Do the job in front of you. If that means you support windows ME as a
> > security portal(!), that's what you do... at least until you find a better
> > job.

or if your bosses are stupid enough to suggest using Win ME for a task like
that and you can't talk them out of it (up to and including asking them to
acknowledge in writing that you've advised against it and that they accept
full responsibility for the consequences of their decision), you can quit and
leave them to their self-inflicted demise.

> We have choices, but choices have to be backed by work - which is the hard
> part.
>
> People who have chosen systemd have spent a lot of time making it work
> better and solving some real problems that other init systems have had for
> many years.  People who want to choose SysVInit have spent a lot of time
> flaming people who write the code.

it's nowhere near as simple as that. there are loons on both sides of the
pro- and anti- systemd debate.  AFAICT, the anti-systemd loons tend to be
nastier (although the pro-systemd loons aren't blameless either), while the
pro-systemd loons seem to be more stupid, as well as ignorant about
anything but solitary, single-user desktop and laptop systems...but i haven't
paid much attention to the debate for ages, i just don't care what other
people want to use, as long as they don't try to force their choices on me.

AFAICT, the majority of the anti-systemd side (the non-loons) think that
systemd as an init system and even as a control groups manager (with or
without its container stuff) is fine or even good. the objection to systemd
from most people has nothing to do with that.

The objection is about all the other stuff that systemd tries to do (and
generally does a crappy, half-arsed job of). it's a hostile takeover of
functionality that other programs do better, and is reminiscent of microsoft's
"embrace and extend" practice and policy for destroying competition.

Closely tied to that is the related objection of how tightly integrated all
those extra things are - you can't easily mix and match the best tools for a
particular job (dns, cron, logging, device control, etc).

If you're relying on distro packages, and you're lucky (as in debian), the
distro doesn't enable all the extras by default and provides decent workarounds
(like defaulting journald to log to syslog. personally, i'd rather not
have journald running at all, but that's at least a workable compromise).

if you're not lucky, you either take all of systemd or compile your own
without the stuff you don't want (losing the benefit of distro upgrades for
systemd), or you switch to something else like openrc (and then find you
have to run most of systemd anyway because lots of stuff has unwarranted and
unreasonable dependencies on it)

anyway, systemd's borging of every function it possibly can will inevitably
lead to the death of innovation in linux and bring about a software
monoculture (nothing will be able to compete with it because in order to do
so, a competitor will have to replicate and replace everything it does, not
just be better at one or two things). and monocultures are *always* unhealthy.

In short, the price for systemd's one or two nice (but not unique) features is
far too high.

that's my position on systemd anyway, and it seems not an uncommon one.

BTW, i'm no huge fan of sysvinit, but it works well enough (it's certainly not
that bad that it requires replacement by something that doesn't know where it
should stop - especially when there are other alternatives that don't try to
take over everything and don't actively stifle competition), doesn't try to do
extra crap that an init system has no business doing, and I don't find shell
scripts at all scary - that bogeyman is even more laughable than systemd's
'it boots really fast' "feature" (not noticeably faster than anything else
once you take into account the fact that most of the boot time is BIOS and
adaptor card ROMs, and provided you don't count the annoying 90 second to 5
minute delays while it tries to mount non-existent filesystems or connect
to non-existent networks etc - e.g. i had to wait over 5 minutes today for
systemd on my laptop to give up on trying to connect to a network when it
wasn't even plugged in to one and wifi was deliberately disabled - and if
there is actually some way to tell it to give up and move on, it's certainly
not obvious).

personally, i think openrc would have been a better choice than systemd for
debian 

Re: How to make systemd more reliable

2016-09-30 Thread Craig Sanders via luv-main
On Fri, Sep 30, 2016 at 02:38:54PM +1000, russ...@coker.com.au wrote:
> > I'm sure you're aware that this variety of rhetoric suffers a rather
> > serious 'if so, so what?' problem (residing somewhere among the
>
> It's "if so don't deal with those people" as so many people have done.
> There are more than a few DDs who have nothing to do with SysVInit
> because of the people who they have to deal with if they choose to do
> so.  

the dominant majority complaining about how hard done by they are
makes me think, "yeah #ALLinitsystemsmatter"

> Why go to the effort of supporting software if there is a better
> alternative that has the added benefit of avoiding assholes?

one of the promises given to soothe those who were concerned about
sysvinit etc being ignored or deprecated after the init system
vote in debian was that systemd would be the default, but sysvinit
and others would continue to be supported.

as predicted (but dismissed as needless paranoia at the time), other
init systems ARE being deprecated and a few DDs (not many yet, but i
don't expect that to last forever) are deliberately dropping sysvinit
(etc) support and ignoring or rejecting patches to add such support.


> To be fair the haters have had some success in making developers cease 

is that really what you think "being fair" constitutes? an
"acknowledgement" that the opposite side are actually quite good at
being evil?

> But really we need more features nowadays.

features that aren't unique to systemd.



> > My current idea of a good system composite is a really tiny, minimal
> > PID1 (leaning towards BusyBox[1]) spawning OpenRC as the init
> > system.  If I ever actually need service supervision, I'd probably
> > use runit or supervisord on whatever daemons merit such supervision.
>
> If you want a tiny minimal init then having one that is linked with
> cp, mv, etc probably isn't the way to go.  It would be ideal if the
> Busybox build system supported splitting some utilities out into
> separate binaries.

but this is what i really wanted to respond to. bash now supports
loadable built-in commands that run without needing to fork an external
command (e.g. like the standard built-ins echo, printf, kill, etc), so
they're fast (avoiding the fork overhead) on the command-line or in a script.

the bash-builtins package in debian comes with headers and source examples,
and a bunch of loadable built-ins:

/usr/lib/bash/basename
/usr/lib/bash/dirname
/usr/lib/bash/finfo
/usr/lib/bash/head
/usr/lib/bash/id
/usr/lib/bash/ln
/usr/lib/bash/logname
/usr/lib/bash/mkdir
/usr/lib/bash/mypid
/usr/lib/bash/pathchk
/usr/lib/bash/print
/usr/lib/bash/printenv
/usr/lib/bash/push
/usr/lib/bash/realpath
/usr/lib/bash/rmdir
/usr/lib/bash/setpgid
/usr/lib/bash/sleep
/usr/lib/bash/strftime
/usr/lib/bash/sync
/usr/lib/bash/tee
/usr/lib/bash/truefalse
/usr/lib/bash/tty
/usr/lib/bash/uname
/usr/lib/bash/unlink
/usr/lib/bash/whoami

with a few more (including tar, cp, mv, rm, and some others), busybox and
tinybox may soon be obsolete.
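
loading one in a script or interactive shell looks like this (paths as per
the debian package list above):

enable -f /usr/lib/bash/sleep sleep
enable -f /usr/lib/bash/head head
type sleep    # now reports: sleep is a shell builtin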


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Orbital simulation

2016-09-26 Thread Craig Sanders via luv-main
On Mon, Sep 26, 2016 at 01:46:29PM +1000, Paul van den Bergen wrote:
> the biggest drawback for both the Niven ring and the Dyson sphere is
> there is no gravitational attraction inside the ring or sphere to the
> sphere - only towards the sun, or only on the outside

i'm surprised that nobody's mentioned Culture Orbitals yet.

https://en.wikipedia.org/wiki/Orbital_%28The_Culture%29

they're much smaller than a niven ring, they orbit a star (and rotate
independently) rather than encircle it - slightly tilted to get a day/night
cycle as the orbital rotates.  They have an AI Mind located at the "hub".


craig

ps: i want the culture to come rescue us humans - and our world - from
capitalism. also want neural laces, mind backups, body changing, robot
bodies, utopian technosocialism, AI ships with great names, and more.
after 40+ years of reading science fiction, the culture is the first SF
universe I actually want to live in.

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-10-02 Thread Craig Sanders via luv-main
[edited to put the interesting stuff at the top, and the boring stuff
at the bottom where it's easily ignored.]

On Sat, Oct 01, 2016 at 08:05:31PM +1000, russ...@coker.com.au wrote:
> Bash is still quite a bit bigger than busybox and links with a couple
> of libraries that busybox doesn't link with.  Systems which run
> busybox typically run a smaller shell than bash.

yes, but bash does a lot more, and is a lot nicer to use interactively. the
point of busybox is to combine primitive implementations of common utilities
with a primitive bourne-like shell in a single binary. the busybox shell
doesn't even have history recall and editing (which i consider to be essential
for any command-line work).

the difference between 600K and 1.2MB (or even 2MB or 3MB if a good subset
of GNU tar and other GNU tools - the rest of coreutils and find, to start
with, maybe sed and awk too - can be made loadable) is minimal, even on small
embedded systems these days; most have GBs of storage at least, and hundreds
of MB or even a few GB of RAM.

actually, much of the simple stuff you'd normally use sed for can be done in
bash anyway, these days. e.g. simple search and replace on variables, with
no need to fork sed or something: foo=${str/pattern/replacement} and other
variations (see http://tldp.org/LDP/abs/html/string-manipulation.html)

only simple shell glob patterns rather than regexes, but that's good enough for
lots of things.
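
a few examples:

str="foo bar foo"
echo "${str/foo/baz}"     # first match only:  baz bar foo
echo "${str//foo/baz}"    # all matches:       baz bar baz
echo "${str#* }"          # strip shortest leading match of '* ':  bar foo
echo "${str%% *}"         # strip longest trailing match of ' *':  foo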




boring stuff below:



On Sat, Oct 01, 2016 at 08:05:31PM +1000, russ...@coker.com.au wrote:
> > the dominant majority complaining about how hard done by they are
> > makes me think, "yeah #ALLinitsystemsmatter"
> 
> There is no comparison between BLM and init systems.

i guess i'm not in the club of people who are allowed to make farcical
analogies relating some group of people to another, despicable group.

> The majority of Debian users don't care much which init system is in
> use.

the majority had no say in it, and probably aren't capable of switching
to something else if systemd doesn't meet their needs.


> > as predicted (but dismissed as needless paranoia at the time), other
> > init systems ARE being deprecated and a few DDs (not many yet, but
> > i don't expect that to last forever) are deliberately dropping
> > sysvinit (etc) support and ignoring or rejecting patches to add such
> > support.
>
> That's what happens when you have a war about something.  A lot of the
> energy that could be devoted to supporting other init systems is spent
> on the war

so it's OK to break promises because some (other) people said some mean
things somewhere along the line?

right.

i think what actually happened is that they knowingly lied just to get their
preferred option approved, and actually had no intention of enabling or even
allowing continued support of anything except systemd.

> and now everyone wants to just forget it.

actively discriminating against other inits is not "forgetting it"

it's fair enough to not make any personal effort to support something
you don't use or are not interested in...it's quite another to reject
out of hand someone else's contribution to add that support.

and "avoiding arseholes" is not a valid excuse - the areseholes aren't
the ones submitting patches or otherwise doing useful work. in fact,
deliberately rejecting such patches is likely to piss off some of those
arseholes, so it fails even at that.

> But you have the option to patch things and to run your own repository of
> patched packages if some DDs don't accept your patches.

that, to put it extremely mildly, is very far from optimal or even reasonable.

Debian is a Universal Operating System.  It's not just for those who like
particular packages or the most popular packages and "up yours" to everyone
else - that attitude, more than anything else, is why I am still resisting the
move to systemd.

i've encountered the attitude before, e.g. with djb-ware, and my warnings
about the dead-end nature of qmail back then proved to be exactly right.
systemd presents exactly the same kind of one-way conversion danger, once
you've switched it will be extremely difficult to switch to anything else

By inclination, i'm in the anything-but-systemd camp because systemd is
the only one that's actively hostile to other software that it sees as
competing with it (now or in future). anything else would be easy to
switch away from if it turns out to be a bad idea or have unforeseen
flaws. systemd won't be.

to me, that's far more important than a few minor improvements (none
of which are unique to systemd).

> > > To be fair the haters have had some success in making developers
> > > cease
> >
> > is that really what you think "being fair" constitutes? an
> > "acknowledgement" that the opposite side are actually quite good at
> > being evil?
>
> In this case yes.

i think you're missing something important about the meaning of the word
"fair".

specifically, "fair" does not ever mean "damning with faint praise"

> The 

Re: Configuration and automation query

2016-11-21 Thread Craig Sanders via luv-main
On Tue, Nov 22, 2016 at 11:52:03AM +1100, Peter Ross wrote:
> The default configuration may change, according to best practice(e.g.
> which encryption protocols are safe to use etc). so you are happy to
> use whatever the package provides (if it is well-maintained)
> 
> However, some things you may not like, say: the "PermitRootLogin yes" line.
> [...]

not sure what you're running (freebsd, i think) but here's what debian does:

When you upgrade a debian package that has a listed conffile, what happens
depends on whether the installed version of the file has been changed from
the default, and on whether the packaged version has changed since the
previous release.

If the installed version of a conffile hasn't been changed (i.e. edited
in some way by the local admin) then:

 - if the package version hasn't changed, nothing happens
 - if the package version HAS changed, it replaces the currently installed
   version

If the installed conffile has been edited and the package version hasn't
changed, then nothing happens.

If the installed conffile has been edited and the package version HAS
changed, then the user is asked what they want to do - the three main
options are:

 - replace customised config file with package config file
 - keep custom config
 - view diff 

(you can also fork a shell or switch to another terminal and manually
diff and/or edit the relevant files, view man pages etc, before making a
decision)

anyway, if the user chooses to replace their custom config file, it is renamed
to filename.dpkg-old

if the user chooses to keep their custom config file, the packaged
version is renamed to filename.dpkg-dist

These renames allow you to easily diff the config files from the command
line and manually cherry-pick any changes.  They also make it easy to
change your mind later with just a simple mv to replace the current
config file with the .dpkg-old or dpkg-dist version.  This is still
useful/convenient even when using revision control for your config
files.
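
e.g. (sshd_config is just an example file here):

# see what leftover versions are lying around
find /etc -name '*.dpkg-old' -o -name '*.dpkg-dist'

# compare your config with the packaged version you declined
diff -u /etc/ssh/sshd_config /etc/ssh/sshd_config.dpkg-dist

# changed your mind? swap the packaged version back in
mv /etc/ssh/sshd_config.dpkg-dist /etc/ssh/sshd_config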

> I wonder whether there is any better support from configuration
> management tools you are using.

Package: etckeeper
Version: 1.18.5-1
Description-en: store /etc in git, mercurial, bzr or darcs
 The etckeeper program is a tool to let /etc be stored in a git, mercurial,
 bzr or darcs repository. It hooks into APT to automatically commit changes
 made to /etc during package upgrades. It tracks file metadata that version
 control systems do not normally support, but that is important for /etc, such
 as the permissions of /etc/shadow. It's quite modular and configurable, while
 also being simple to use if you understand the basics of working with version
 control.
Homepage: http://etckeeper.branchable.com/


automated revision control of /etc and all subdirectories. nicely
integrated with apt to automatically commit changes before and after
any package installs/upgrades/removals (i think it also has similar
support for yum and dnf). commits are also run nightly from cron. and,
of course, you can manually commit any file with a useful commit log
message whenever you want.

using etckeeper and git makes it even easier to cherry-pick config file
changes, and is useful for all files under /etc, not just those listed
in a package as a "conffile".

and the changelog and history that git (or whatever) gives you is as
useful for config files as it is for programming. plus you get all the
other benefits of git.

a long time ago, i used to manually use subversion on most systems (and
rcs before that), but having etckeeper available as a package means that
revision control of my config files just happens automatically on all my
systems whether i'm paying particular attention to it or not. and adding
a remote in git is easy, so I can push configs to any git repo...and
clone (and fork and merge and cherry-pick and diff and ...) them as
needed.
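
day-to-day it looks something like this (the remote URL is obviously made
up for the example):

# commit a manual change with a useful message
etckeeper commit "sshd_config: disable PermitRootLogin"

# it's a normal git repo, so history and diffs work as usual
cd /etc
git log --oneline -- ssh/sshd_config
git diff HEAD~1 -- ssh/sshd_config

# push a copy somewhere else
git remote add backup ssh://git@backuphost/srv/git/etc-myhost.git
git push backup master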

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Tue, Oct 11, 2016 at 11:13:34PM +1100, russ...@coker.com.au wrote:
> On Tuesday, 11 October 2016 10:30:01 PM AEDT Craig Sanders via luv-main wrote:
> > I was rebooting anyway in order to replace a failed SSD on one
> > machine and convert both of them to root on ZFS.  It booted up OK on
> > both, so I made it the default. If it refrains from sucking badly
> > enough to really piss me off for a decent length of time, i'll leave
> > it as the default.
>
> That's a bold move.  

switching my main system to systemd? yes, i know. very bold. very risky.
i'll probably regret it at some point.

:)

> While ZFS has been totally reliable in preserving data I have had
> ongoing problems with filesystems not mounting when they should.  

i know other people have reported problems like that, but it's never
happened on any of my zfs machines...and i've got most of my pools
plugged in to LSI cards (IBM M1015 reflashed to IT mode) using the
mpt2sas driver - which is supposed to exacerbate the problem due to the
staggered drive spinup it does.

the only time i've ever seen something similar was my own stupid fault,
i rebooted and just pulled out the old SSD forgetting that I had ZIL
and L2ARC for the pools on that SSD.  I had to plug the old SSD back in
before I could import the pool, so i could remove them from the pool
(and add partitions from my shiny new SSDs to replace them).

> I don't trust ZFS to be reliable as a root filesystem, I want my ZFS
> systems to allow me to login and run "zfs mount -a" if necessary.

not so bold these days, it works quite well and reliably. and i really
want to be able to snapshot my rootfs and backup with zfs send rather
than rsync.

anyway, i've left myself an ext4 mdadm raid-1 /boot partition (with
memdisk and a rescue ISO) in case of emergency.

the zfs root on my main system is two mirrored pairs (raid-10) of
crucial mx300 275G SSDs(*). slightly more expensive than a pair of
500-ish GB but much better performanceread speeds roughly 4 x SATA
SSD read (approximating pci-e SSD speeds), write speeds about 2 x SATA
SSD.

i haven't run bonnie++ on it yet.  it's on my todo list.

http://blog.taz.net.au/2016/10/09/converting-to-a-zfs-rootfs/


the other machine got a pair of the same SSDs, so raid-1 rather than
raid-10. still quite fast (although i'm having weirdly slow scrub
performance on that machine. haven't figured out why yet. performance
during actual usage is good, noticeably better than the single aging SSD
I replaced).


(*) 275 marketing GB.  SI units.  256 GiB in real terms.

they're good value for money anyway...i got mine for $108 each.  I've
since seen them for $97 (itspot.com.au).  MSY doesn't stock them for
some reason (maybe they want to clear their stock of MX200 models
first).

we're just on the leading edge of some massive drops in price/GB. a bit
earlier than I was predicting, i thought we'd start seeing it next year.
won't be long before 2 or 4TB SSDs are affordable for home users (you can
get 2TB SSDs for around $800 now). and then I can replace some of my HDD
pools.

> I agree that those things need to be improved.  There should be a way
> to get to a root login while the 90 second wait is happening.

so there really is no way to do that?  i was hoping it was just some
trivially-obvious-in-hindsight thing that i didn't know.

it's really annoying to have to wait and watch those damn stars when you
just want to get a shell and start investigating & fixing whatever's
gone wrong.

> There should be an easy and obvious way to display those binary logs
> from the system when it's not running systemd or from another system
> (IE logs copied from another system).

yep. can you even access journald logs if you're booted up with a rescue
disk? (genuine question, i don't know the answer but figure it's one of
the things i need to know)

craig

--
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Tue, Oct 11, 2016 at 09:29:37PM +1100, russ...@coker.com.au wrote:
> On Tuesday, 11 October 2016 8:14:29 PM AEDT Erik Christiansen via luv-main 
> wrote:
> > (Though I'm not sure that systemd's rapacious appetite for
> > monolithic hegemony does a lot more than stultify its own
> > development. In any ecological niche, more agile competitors will
> > tend to gain ascendancy. I look forward to that, and will do what
> > I can to avoid systemd - as I would any unwieldy dinosaur. If that
> > involves avoiding gnome, then that's no loss.)
>
> Did you understand the previous versions of some of the things that
> systemd replaces like ConsoleKit?  Did you avoid them too?

i'd be happier if consolekit and udev and all the rest were separate
from systemd.

i don't particularly care that they're worked on by the same team, but
they should be worked on as separate projects no matter how closely
related they might be, or seem to be in their minds.

then alternative compatible implementations (or forks) could be drop-in
replacements.

but the systemd devs' definition of "modular" is "you can turn off some
stuff when you compile" so now we're stuck with only what they think is
important to work on.


i was about to post a sarcastic analogy suggesting that squid and apache
should be combined because they're sort of related, just like systemd
and udev and consolekit etc are sort of related.

but that's actually a good analogy - long ago, we used to have cern
httpd, which did both proxying and web serving. it wasn't particularly
good at either. cern got replaced by ncsa httpd which got patched
into becoming apache (which specialised in web serving, but retained
some proxy capability - mostly used for reverse-proxy use-cases), and
standalone proxies like squid were developed. we now have dozens (at
least) open source web server implementations, and several web proxy
implementations. each one filling a particular niche, or experimenting
with new ideas.

which is the exact opposite of where we're heading with systemd: not a
thriving ecosystem, but a sterile monoculture.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Tue, Oct 11, 2016 at 08:14:29PM +1100, luv-main@luv.asn.au wrote:
> On 01.10.16 01:34, Craig Sanders via luv-main wrote:
> > anyway, systemd's borging of every function it possibly can will
> > inevitably lead to the death of innovation in linux and bring about
> > a software monoculture (nothing will be able to compete with it
> > because in order to do so, a competitor will have to replicate
> > and replace every thing it does, not just be better at one or two
> > things). and moncultures are *always* unhealthy.
> >
> > In short, the price for systemd's one or two nice (but not unique)
> > features is far too high.
> >
> > that's my position on systemd anyway, and it seems not an uncommon
> > one.
>
> +1

-1, actually.

I mostly gave in. I converted two of my home systems to systemd a few days ago
(including my primary desktop/server machine). I'm hoping it's easier to work
around systemd's bugs and annoyances than to have to deal with the (expected
but unwanted) future of packages having sysvinit support dropped. certainly
less work than converting every new debian system I build to sysvinit or
openrc or something.

I was rebooting anyway in order to replace a failed SSD on one machine and
convert both of them to root on ZFS.  It booted up OK on both, so I made it
the default. If it refrains from sucking badly enough to really piss me off
for a decent length of time, i'll leave it as the default.

At least debian's systemd disables some of the extra crap by default...and
debian's update-grub makes menu entries for both systemd and sysvinit so you
can test it before committing to it on any given machine.

So i've gone from 3 x sysvinit + 1 systemd to 3 x systemd + 1 sysvinit.

If i could figure out why systemd refuses to see /boot or the swap partition
on the fourth system (my mythtv backend), i'd convert that too...but
journalctl doesn't seem to be accessible when i reboot back into the (100%
working) sysvinit. when i have time to look at it, i'll dump it to a text
file.  I was more concerned with getting it rebooted quickly because a
scheduled recording was coming up.

this is, of course, one of the reasons I dislike journald: logs should be
plain text, so you can access them without specialised tools. and rsyslogd
wasn't running in the semi-broken recovery mode that systemd dumped me in
after making me wait 1m30s while it finished doing mysterious things (AFAICT,
it was doing nothing except twiddling some stars on screen).

This is another systemd annoyance: everything and anything that goes wrong
during boot (no matter how trivial) is an excuse for it to twiddle stars for
either 90 seconds or 5 minutes...instead of just giving me a shell instantly
so i can fix it.

And there doesn't seem to be any obvious keystroke to tell systemd to stop
with the damn stars and either continue or give me a shell.


> (Though I'm not sure that systemd's rapacious appetite for monolithic
> hegemony does a lot more than stultify its own development. In
> any ecological niche, more agile competitors will tend to gain
> ascendancy. I look forward to that, and will do what I can to avoid
> systemd - as I would any unwieldy dinosaur. If that involves avoiding
> gnome, then that's no loss.)

I don't use gnome either. in fact they're the originators of The Gnome Problem
that systemd has adopted (which is jwz's CADT definition plus a huge dose of
"fuck you, you just don't understand our glorious vision and you're not our
target audience anyway")

I used to use a few gnome apps but they've all been uglified with hard-coded
gnome title-bars and buttons and inscrutable hieroglyph menus etc, so that they're
hideous on other WMs or DEs like KDE.

I can't tell if that's a deliberate FU to users/devs of other environments or
if they just don't give a damn about them.

Evince was the last gnome app I used, and i've replaced that with qpdfview,
epdfview, and okular, each of which has its good points and bad points -
e.g. epdfview doesn't do tabs or even multiple windows, qpdfview does tabs
nicely but not multi-window, and okular has excellent render-caching, supports
multiple windows, but doesn't do tabs. they're not the only pros and cons but
they're the most obvious. you can sort of make epdfview do multi-window, but
only by starting another instance and then browsing all the way to the file
you want to open.


> And, of course, Vi is also a dinosaur, displaced by Vim with
> "nocompatible" set.

vi's still quite usable. I install plain old nvi (or vim-tiny) on lots of
systems (well, mostly containers and some VMs), it's just vim without the
frills...the really important stuff works the same.

> The coming and passing of systemd will in hindsight be seen as a storm
> in a teacup, I suspect. (Not comparable with the couple of hours after

one can only hope. but probably not. i think it's a one way trip, 

Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Wed, Oct 12, 2016 at 11:18:40AM +1100, russ...@coker.com.au wrote:
> On Wednesday, 12 October 2016 1:31:33 AM AEDT Craig Sanders via luv-main 
> wrote:
> > the only time i've ever seen something similar was my own stupid
> > fault, i rebooted and just pulled out the old SSD forgetting that I
> > had ZIL and L2ARC for the pools on that SSD.  I had to plug the old
> > SSD back in before I could import the pool, so i could remove them
> > from the pool (and add partitions from my shiny new SSDs to replace
> > them).
>
> Did you have to run "zfs import" on it or was it recognised   
> automatically?  If the former how did you do it?  

after plugging the old SSD back in? can't remember for sure, but i think
soit wasn't imported before i rebooted again so wouldn't have been
automatically imported after reboot.

I probably did something like:

zpool import -d /dev/disk/by-id/ 

> Is the initramfs configured to be able to run zfs import?

yes, i have zfs-initramfs installed.


> BTRFS snapshots are working well on the root filesystems of many
> systems I run.  The only systems I run without BTRFS as root are
> systems where getting console access in the event of problems is too
> difficult.

yes, but you can't pipe `btrfs send` to `zfs recv` and expect to get
anything useful. my backup pool is zfs.

and so far, i've had 100% success rate (2/2) with zfs rootfs.

Disclaimer: not a statistically significant sample size. contents
may settle during transport. void where prohibited by law. serving
suggestion only. batteries not included.



> > crucial mx300 275G SSDs(*). slightly more expensive than a pair of
> > 500-ish GB but much better performanceread speeds roughly 4 x
> > SATA SSD read (approximating pci-e SSD speeds), write speeds about 2
> > x SATA SSD.
> >
> > i haven't run bonnie++ on it yet. it's on my todo list.
>
> If you had 2*NVMe devices it would probably give better performance
> than 4*SATA and might be cheaper.  That would also leave more SATA
> slots free.

yes, that would certainly be a LOT faster.  can't see any way it could
be cheaper.  i'd have to get a more expensive brand of ssd plus i'd
need an nvme pci-e card or two.

However, I have SATA ports in abundance.  On the motherboard, I have 6 x
SATA III (4 used for the new SSDs, two previously used for the old SSDs
but now spare) plus another 2 x 1.5Gbs SATA, and some e-sata which i've
never used.  In PCI-e slots, I have 16 x SAS/SATA3 on two IBM 1015 LSI
cards (8 ports in use, 4 spare and connected to hot-swap bays, 4 spare
and unconnected).

PCI-e slots are in very short supply. and my m/b doesn't have any nvme
sockets.

If I could find a reasonably priced PCI-e 8x NVMe card that actually
supported two PCI-e NVMe drives (instead of 1 x pci-e nvme + 1 x sata
nvme), i'd probably have swapped out the spare/unused M1015 cards for
it. i don't have any spare 4x slots.

so i did what I could to maximise performance with the hardware I have.

everything I do on the machine is noticably faster, including compiles
and docker builds etc.

but yeah, eventually I'll move to PCI-e NVME drives. sometime after my
next motherboard & cpu upgrade. 



I'm waiting to see real-world reviews and benchmarks on the upcoming AMD
Zen CPU.

Intel has some very nice (and expensive) high-end CPUs, but their
low-end and mid-range CPUs are more expensive than old AMD CPUs without
offering much improvement...might make sense for a new system, but not
as an upgrade.  Every time I look into switching to Intel, it turns out
I'll have to spend around $1000 to get roughly similar performance to
what I have now with a 6 year old AMD CPU.  I'm not going to spend that
kind of money without a really significant benefit.

I could get an AMD FX-8320 or FX-8350 CPU for under $250 but I'd rather
wait for Zen and get a new motherboard with PCI-e 3.0 and other new
stuff too.  Just going on past history, I'm quite confident that will
be significantly cheaper and better than switching to Intel...i expect
around $400-$500 rather than $800-$1000.


> > we're just on the leading edge of some massive drops in price/GB.
> > a bit earlier than I was predicting, i though we'd start seeing it
> > next year. wont be long before 2 or 4TB SSDs are affordable for home
> > users (you can get 2TB SSDs for around $800 now). and then I can
> > replace some of my HDD pools.
>
> It's really changing things.  For most users 2TB is more than enough
> storage even for torrenting movies.  

btw, for torrenting on ZFS, you need to create a separate dataset with
recordsize=16K (instead of the default 128K) to avoid COW fragmentation.
configure deluge or whatever to download to that and then move the
finished torrent to another filesystem.

probably same or similar for btrfs.
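
e.g. (pool/dataset names are just examples):

zfs create -o recordsize=16K tank/torrents
zfs get recordsize tank/torrents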

> I think that spinning 

Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Wed, Oct 12, 2016 at 10:42:03AM +1100, Allan Duncan wrote:
> On my todo list.  It happens when I boot after failing to alter fstab
> to match the actual disks connected.

it seems to happen at the slightest excuse, whether the machine ends up
booting or not. just what everyone needs, one or more 90 second delays
during the boot process.

maximising downtime by putting long delays in the way of whoever's
trying to fix the problem is not a good idea.

is this delay configurable to something more reasonable, like 30 or 15
or even 5 seconds? can it be disabled?

can i at least have a --stop-wasting-my-fucking-time option?

(that would be more useful than their deprecated --kill-my-kernel-please,
aka --debug)

> > yep. can you even access journald logs if you're booted up with a
> > rescue disk?
>
> Good question - went and looked: systemctl -b -1 will pull up the log
> from the boot before the rescue boot.
>
> --list-boots will give you the history in the database.

neither of those work. that would be because they're journalctl options,
not systemctl.

also, the default for the Storage setting in /etc/systemd/journald.conf
(at least in debian) is "auto", so no persistent storage of journal
unless you manually create /var/log/journal (which, of course, you
wouldn't do until *after* you realise you need to and only after you've
figured out what needs to be done. bad default)
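
if you do decide you want persistent journals, it's something like this
(from the journald.conf man page, iirc - run as root):

mkdir -p /var/log/journal
systemd-tmpfiles --create --prefix /var/log/journal
systemctl restart systemd-journald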

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: How to make systemd more reliable

2016-10-11 Thread Craig Sanders via luv-main
On Wed, Oct 12, 2016 at 03:35:34PM +1100, russ...@coker.com.au wrote:
> On Wednesday, 12 October 2016 2:46:01 PM AEDT Craig Sanders via luv-main 
> wrote:
> > yes, but you can't pipe `btrfs send` to `zfs recv` and expect to get
> > anything useful. my backup pool is zfs.
>
> In the early days the plan was to have btrfs receive not rely on
> BTRFS, so you could send a snapshot to a non-BTRFS filesystem.  I
> don't know if this is a feature they continued with.

nope. from what i've read, they originally intended to make it tar
compatible but tar couldn't do what they needed, so they dropped that
idea.

> Last time I was buying there wasn't much price difference between SATA
> and NVMe devices.  Usually buying 2 medium size devices is cheaper
> than 4 small devices.

right, but there's a difference between the price of Crucial SSDs and
Samsung or Intel. There's also a price difference between having to buy
a pci-e nvme card and not having to buy one.

> > PCI-e slots are in very short supply. and my m/b doesn't have any
> > nvme sockets.
>
> That's a problem for you then.

well, yes, of course it is. we're talking about my system here, and why
I choose 4 cheap SATA SSDs rather than two pci-e SSDs.


> > i'd still want to buy them in pairs, for RAID-1/RAID-10 (actually,
> > ZFS mirrored pairs)
>
> The failure modes of SSD are quite different to the failure modes
> of spinning media.  I expect it will be some years before there is
> adequate research into how SSDs fail and some more years before
> filesystems develop to work around them.  ZFS and WAFL do some
> interesting things to work around known failure modes of spinning
> media, they won't be as reliable on SSD as they might be because of
> the spinning media optimisation.

I'd still use some kind of raid-1/mirroring anyway, no matter what kind
of drives I had. raid isn't a substitute for backups, but it does reduce
the risk that you'll need to restore from backup (and the downtime and
PITA-factor that goes along with restoring)

also, there's no way for ZFS to correct any detected errors if there's
no redundancy.

i don't mind paying double for storage. it's a bit painful at purchase
time, but that's quickly forgotten. and a lot less painful than the time
and hassle required to restore from backup, and losing everything new or
modified since the previous backup (nightly, but that's still up to a
full day's worth of stuff that could be lost.  Now that i've got rootfs
on ZFS, I can snapshot frequently and backup more often with zfs send)
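
the rough shape of it (dataset names, dates and the backup host are
examples only):

zfs snapshot rpool/ROOT@2016-10-12
zfs send -i rpool/ROOT@2016-10-11 rpool/ROOT@2016-10-12 \
    | ssh backuphost zfs recv -F backup/ROOT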

craig

--
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: [luv-announce] Notice of resumption of adjourned Special General Meeting, Tuesday 6 Dec 2016

2016-11-30 Thread Craig Sanders via luv-main
On Wed, Nov 30, 2016 at 03:26:45PM +1100, russ...@coker.com.au wrote:
> On Tuesday, 29 November 2016 1:47:47 PM AEDT Craig Sanders via luv-main wrote:
> > > *That Linux Users of Victoria apply to become a subcommittee of
> > > Linux Australia [...]
> > 
> > what are the arguments for and against?
> 
> IMHO there is no good argument against.

ok, sounds reasonable.

gpg-signed proxy sent to luv-secretary (i can't make it myself, i have a
hospital appt on the 6th).

craig

--
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Virtualbox

2016-11-30 Thread Craig Sanders via luv-main
On Wed, Nov 30, 2016 at 03:35:24PM +1100, russ...@coker.com.au wrote:
> As Allan noted DKMS is the ideal solution to that problem.  But it's  
> still a reason for me to avoid it.  I currently have DKMS in place
> for zfsonlinux on some of my systems and don't want the added pain of 
> multiple DKMS installations.  

FYI, 'dkms mkbmdeb' now actually works to create a binary-only package.
It didn't do anything that was actually useful for years, but that was
fixed earlier this year.
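
e.g. something like this (module name/version just an example - dkms tells
you where it put the resulting .deb):

dkms mkbmdeb zfs/0.6.5.8 -k "$(uname -r)"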


Unfortunately, it doesn't add a Provides: line, so you'll need to use
equivs to satisfy dependencies.

[ ... 5 mins later ... ]

actually, it does now. the bug report I submitted about this has been
closed, somebody submitted a patch back in October.  I didn't get an
email telling me though (BTS has always been a bit unreliable about
email notifications in my experience)

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=830670


> Doesn't KVM/Qemu work with MS-DOS and MS-Win?

yep, it does.  

Virtualbox has better graphics support for the VM without messing around
with PCI passthrough and installing a second GPU card, which generally
only matters for games and other things that really need good, fast
graphics rendering.

I've used KVM with a few Win XP & Win 7 VMs.  Works well enough.  If I
wanted to play windows games with KVM, I'd probably install another GPU.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: phone support

2016-12-01 Thread Craig Sanders via luv-main
On Thu, Dec 01, 2016 at 09:43:46PM +1100, russ...@coker.com.au wrote:
> I encourage anyone with Android phones in such situations to give them
> to the LUV hardware library.  Even 5yo Android phones are nice little
> embedded Linux systems that can be used for running your own programs.

they also make nice WIFI VOIP handsets if you're running asterisk or
similar, especially if you've got a charging dock so that they can stand
upright.  Makes a quite decent alarm clock too.

most android phones will work in WIFI-only mode without a SIM card.


I'm still using my old HTC Desire HD, but i'll probably replace it
sometime in the next few years. When i do, i'll keep using it as a WIFI
handset for asterisk until the battery dies.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-23 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 06:02:54PM +1100, russ...@coker.com.au wrote:
> While it is documented to work that way doesn't mean it's a good idea to do 
> it.

the issue isn't about bash and '-e', it's about env breaking the ability to
pass options on the #! line. '-e' is just a trivial illustrative example.

with bash it's only a minor annoyance because most (or maybe all, i can't
remember) options can be enabled with a 'set' command inside the script
anyway.  For other languages, it can break the script entirely or, worse,
change the script's behaviour in subtle and "interesting" ways.


> cd $DIR
> rm -rf *

-e isn't a replacement for defensive programming around potentially dangerous
things, it's just a way to avoid uglifying your code by adding exit-status
checks after every trivial command. an uncaught non-zero exit code will abort
the script.

a saner, or more defensive, way to write that would be:

  cd "$DIR" && rm -rf *

or 

  cd "$DIR" \
&& rm -rf * \
|| exit 1

and it's worthwhile doing that (including quoting the $DIR variable) whether
you use 'bash -e', 'set -e' in the script, or neither.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 02:44:28PM +1100, Andrew Mather wrote:
> Module files are generally set up by the admins, so they don't require
> anything more from the user than including the appropriate loading
> statements in their scripts.  It's not unlike a wrapper script really.

it sounds similar to (but quite a bit more advanced than) what i've done in
the past with wrapper scripts and collections of environment setting files
sourced (#included) as needed.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


reproducibility of results

2016-12-23 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 08:22:45PM +1100, russ...@coker.com.au wrote:

> I've heard a lot of scientific computing people talk about a desire to
> reproduce calculations, but I haven't heard them talking about these
> issues so I presume that they haven't got far in this regard.

it was a big issue when i was at unimelb (where i built a HPC cluster
for the chemistry dept and later worked on the nectar research cloud).

depending on the funding source or the journal that papers were
published in, raw data typically had to be stored for at least 7 or 12
years, and the exact same software used to process it also had to be
kept available and runnable (which was an ongoing problem, especially
with some of the commercial software like gaussian...but even open
source stuff is affected by bit-rot and also by CADT-syndrome. we had
a source license for gaussian, but that didn't guarantee that we could
even compile it with newer compilers. it might have changed now, but
iirc it would only compile with a specific intel fortran compiler.
numerous efforts to compile it with gfortran ended in failure)

and some of the data sets that had to be stored were huge - dozens or
hundreds of terabytes or more. and while it wasn't something i worked on
personally, i know that for some of the people working with, e.g., the
synchrotron, that's a relatively piddling quantity of data.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 12:26:54PM +1100, Sean Crosby wrote:
> > that's one of the things that symlinks are for.
> >
> > e.g. I have python2.6, 2.7, 3.1, 3.2, 3.4, and 3.5 all installed in
> > /usr/bin, with symlinks python & python2 pointing to 2.7, and python3
> > pointing to 3.5
> 
> All well and good if you're root

and if you're not root, you can do the same things in ~/bin or edit
the #! line of your script to point to your preferred interpreter.

if it's not your script and you can't edit it, then either accept that
it's going to be run with a system interpreter or write a wrapper script
in ~/bin to call it with your preferred interpreter.


> Yes but with the software our students use, they repackage python into
> a self contained directory, under the version of the software

they should edit the #! line then. it's not hard, and it avoids making
an unmaintainable mess.

> Hence why /usr/bin/env python is great.

it's not great. it's a mistake arising from inadequate understanding or
knowledge of existing tools and practices.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 02:35:06PM +1100, Russell Coker wrote:
> Putting the -e in the first line of the shell script is considered bad
> practice anyway.

that's debatable. some think it's bad practice. some think it's using bash as
it's documented to work.

'bash -e' was just a simple example (and it's easy enough to just have 'set
-e' in the script itself), not the beginning or end of the problem.


As mentioned, the problem is even worse if used with other languages. e.g. the
following perl script works and produces useful (and expected) output:

#!/usr/bin/perl -p

s/foo/bar/;


This doesn't:

#!/usr/bin/perl

s/foo/bar/;


for those not familiar with perl, `-p` tells perl to wrap the entire script
(aside from a few exclusions like BEGIN and END blocks) in a while/read/print
loop on STDIN+filename args, so the former script actually runs as if the code
is more like this (slightly simplified):

#!/usr/bin/perl

while (<>) {
  s/foo/bar/;
  print "$_";
}

see `man perlrun` for more details. and note that it gets even more
complicated when other options are also used - e.g. see the example given for
'#!/usr/bin/perl -pi.orig'



> If correct operation of the script requires aborting on error then you don't
> want someone debugging it with "bash -x scriptname" to accidentally stop
> that.

as with most things, there are pros and cons to that. sometimes you want -x to
override -e, sometimes you don't. and you can always "override the override"
by running 'bash -x -e scriptname'

the point is that #!/usr/bin/env breaks the documented behaviour of
interpreters, preventing scripts from being run with any options that they may
require.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: reproducibility of results

2016-12-23 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 09:41:30PM +1100, russ...@coker.com.au wrote:
> Debian/Unstable has a new version of GCC that has deprecated a lot of the 
> older STL interfaces.  It also has a kernel that won't work with the amd64 
> libc from Wheezy.  

yeah, i know. i had to build frankenwheezy (mwahahahah! gasp in horror
at wheezy with libc6 from jessie grafted on) for some docker images a
while back:

  
http://blog.taz.net.au/2016/09/16/frankenwheezy-keeping-wheezy-alive-on-a-container-host-running-libc6-2-24/

AFAICT at the time, the incompatibility was with the later libc6 on
my sid docker host, not necessarily with the kernel...IIRC it started
failing after I upgraded libc6 on the host, without upgrading the kernel
or rebooting.

i don't know if further evidence invalidated my theory. either way,
upgrading wheezy's libc6 to jessie's libc6 solved the problem.


> It should be possible to change the Wheezy libc to the newer amd64
> system call interface without changing much and using kvm or Xen is a 
> possibility too.  

yep, wheezy still runs in a VM. as does etch (i also tried the same kind
of libc6 etc upgrades on etch to get it working in docker on sid but
couldn't get it working. gave up on that and created a VM instead. so
etch-based images will stop working in docker when stretch is released)


> But I can imagine a situation where part of the tool-chain for
> scientific computing had a bug that was only fixed in a new upstream
> release that required a new compiler.

that's one of the advantages of VMs, you can keep old software alive
indefinitely...and that works very nicely with the kind of stuff I was
doing at Nectar with openstack - basic idea was to let researchers start
up VMs or even entire HPC clusters of VMs (e.g a controller-node VM
and a bunch of compute-node VMs, plus the required private networking,
scripting, configuration, etc) as needed for their computational tasks.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: luv-main Digest, Vol 64, Issue 15

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 01:06:47PM +1100, Andrew Mather wrote:
> We use the "modules" environment (TACC's lmod implementation specifically)
> for this type of thing.
> 
> https://www.tacc.utexas.edu/research-development/tacc-projects/lmod
> 
> It allows multiple versions of packages to exist without library collisions
> and so on.  Loading the appropriate modules allows the user to set up the
> execution environment and even swap between versions if necessary.

this looks interesting. i'll have to read more about it but at first sight it
seems like a specific language and system for setting up the environment for a
particular program/script.

one of the main reasons i prefer wrapper scripts (or symlinks) is that they
don't rely on undocumented and unknown settings (PATH, LDPATH, etc) in some
random individual's environment. scripts document those settings explicitly.
symlinks just use the standard system environment.

this has the huge benefit of NOT relying on fallible human memory, resulting
in reproducible, auditable, and easily debugged software usage. also avoids
seemingly random breakage from changes to the environment - "why doesn't this
work? it ran perfectly 3 weeks ago when i last ran it."
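
i.e. a wrapper script as dumb as this (paths and versions invented for the
example):

#!/bin/sh
# wrapper: everything this tool needs is spelled out here, not hidden
# away in someone's login environment
PATH=/opt/foo/1.2.3/bin:/usr/bin:/bin
LD_LIBRARY_PATH=/opt/foo/1.2.3/lib
export PATH LD_LIBRARY_PATH
exec /opt/foo/1.2.3/bin/foo "$@"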



i'm kind of surprised that a language with the slogan "explicit is better than
implicit" is one of the main perpetrators of the #!/usr/bin/env abomination.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: reproducibility of results

2016-12-23 Thread Craig Sanders via luv-main
On Sat, Dec 24, 2016 at 12:51:01AM +1100, russ...@coker.com.au wrote:

> https://github.com/docker/docker/issues/28705
> https://lwn.net/Articles/446528/

thanks, i'll have to read those later today.


> So the kernel command-line option might be the best option for etch.

possibly.  worth a try, anyway.


> Also the Jessie kernel will have security support for another couple
> of years.  There's no reason why you couldn't run a Stretch host with
> a Jessie kernel to support Etch docker images if that was necessary.
> Of course that would make ZFS kernel module support even more exciting
> than it already is...

I'd rather just run an etch VM. Or if i really needed etch in a container
(or multiple containers) for some reason then a jessie VM running docker
or similar...that way i'm only running the old stuff for the things that
actually need it.



> > > But I can imagine a situation where part of the tool-chain for
> > > scientific computing had a bug that was only fixed in a new
> > > upstream release that required a new compiler.
> >
> > that's one of the advantages of VMs, you can keep old software
> > alive indefinitely...and that works very nicely with the kind of
> > stuff I was doing at Nectar with openstack - basic idea was to
> > let researchers start up VMs or even entire HPC clusters of VMs
> > (e.g a controller-node VM and a bunch of compute-node VMs, plus
> > the required private networking, scripting, configuration, etc) as
> > needed for their computational tasks.
>
> That doesn't even solve the problem.

it does for reproducing results from most scientific computing software.
in fact, a VM is about the only way to guarantee the exact same software
environment for old software running on an old OS (you still have to be
careful about the underlying hardware - Intel and AMD, for example, have
slightly different quirks and bugs...and Intel has or had a habit of
crippling code compiled with their compilers if they detect at run time that
it's running on a non-Intel CPU)

To reproduce results from an old version of, say, Gaussian (a very popular
commercial computational chemistry program) all i need to do is build and keep
a VM image that runs it. if/when i ever need to, i just fire up the VM.

How is that not solving the problem?

The requirement is to reproduce the same results from the same data using
the same program (bugs and all), not to get the old software running on a
newer, updated linux distro or to re-process the old data with a new improved
version(*). What the old version runs on is pretty much irrelevant, as long as
it runs and produces exactly the same results. the point of the exercise is
academic integrity, being able to prove that you didn't make up your results.

if you can't generate exactly the same results, then that's a problem - even
if the new results are better or more accurate.

(*) If updated, bug-fixed results are required then that's a completely separate
issue - and material for a new paper or a separately published correction.




BTW, VMs also solve the problem for other desktop apps that only run in an old
version of your distro. ditto for windows apps. For example, i've got a Win
XP app(**) that won't run in Win 7.  It will run partially in WINE (almost
everything works except print and export), but runs fine in a Win XP VM.

(**) GURPS GURU, a GURPS RPG character generator - merely freeware, not Free
Software so source code is not available...i haven't even been able to track
down the author's current contact details, he seems to have vanished off the
net sometime in the mid-2000s.


> Wheezy lost support for Chromium due to compiler issues.  Kmail in
> Jessie never worked correctly for large mailboxes.  So for me a basic
> workstation (lots of mail and web browsing) wasn't functional on
> either of those releases.  I ended up running Jessie and then Unstable
> with a Wheezy image under systemd- nspawn for email which meant that I
> couldn't click on a link to launch a browser (not without more hackery
> than I had time for).

Scientific computing has little or nothing to do with chromium, other
browsers, or most other desktop stuff. If there's a GUI at all, it's usually
just some kind of front-end app to generate control or data files for
text-mode programs (often run on multiple nodes of a cluster), and/or to
visualise or post-process the results.

> It's easy to imagine a similar chain of events breaking a scientific
> workflow.

i think you don't know what a scientific computing workflow actually is. it's
not like desktop app stuff (they have macs or windows or linux PCs for that),
or even like systems geekery. it's a different kind of usage entirely.

scientific computing jobs tend to be batch jobs, not interactive. and can take
anywhere from minutes to hours, days, months, sometimes even years to run to
completion even on large clusters (which are shared resources running other
jobs for you and for other people at the same time too).  You submit the job 

Re: reproducibility of results

2016-12-25 Thread Craig Sanders via luv-main
On Sun, Dec 25, 2016 at 04:56:05PM +1100, Paul van den Bergen wrote:
> Funny, I was asked about exactly the same problem when I started
> @WEHI... only there was no attempt made to even start tackling the
> problem...

yeah, we were constantly getting individual academics and research
groups asking us about storage, and then trying to do the best we could
with minimal resources.

the unfortunate fact is that disks/storage arrays and file-servers
and tape libraries etc are expensive. You can replace a very large
percentage of your up-front capital expense with skilled techs, which
are an on-going cost (you're going to need them to look after expensive
equipment anyway, and it has to be maintained & upgraded for 7+ years),
but it's still going to cost a lot for huge data storage anyway, even if
you avoid over-priced name-brand gear.

> Virtualisation of workload makes the problem a lot easier to tackle,
> but even so... 7 years is a long time in IT...

cheap big disks helps a lot too. but you need a lot of them, plus
backup - on-site and off-site.

CPU & RAM are more than adequate for pretty nearly any file-storage
needs these days...could always use more of both for computational
stuff.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 08:11:15AM +1100, Sean Crosby wrote:
> I've taken to using /usr/bin/env a bit more because of the max length
> limit in shebang lines. We store newer versions of Ruby, Python etc
> on a separate filesystem, where there are many versions of these
> directories, and they are hidden down quite far in the dirtree. So we
> regularly hit the max shebang length limit of 128 characters.

that's one of the things that symlinks are for.

e.g. I have python2.6, 2.7, 3.1, 3.2, 3.4, and 3.5 all installed in
/usr/bin, with symlinks python & python2 pointing to 2.7, and python3
pointing to 3.5

that's all managed automatically by the system python packages.

if i ever need a custom compiled python 3.x or whatever, I can either
make a package the same way or install it under /usr/local and create
and/or update the symlink as needed.

or i can compile and install it anywhere and make a specific symlink
(e.g. /usr/local/bin/python.custom) pointing to it - avoiding the 128
character #! limit.

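e.g. a minimal sketch (the deep install path here is made up purely for
illustration):

    ln -s /opt/interpreters/python-3.5.2-custom/bin/python3.5 /usr/local/bin/python.custom

and then the script's first line is just:

    #!/usr/local/bin/python.custom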

python scripts have either a specific versioned binary name in the #!
line or just #!/usr/bin/python or #!/usr/bin/python2 for the latest
python 2.x or #!/usr/bin/python3 for the latest python 3.x. at some
point in the future, python3 will become the default python and
/usr/bin/python will point to it.

and the scripts work exactly the same, using the exact same interpreter
(with the exact same set of library modules) no matter who runs them or
what environment they're run in (e.g. from a shell, or from cron, or a
web server).  Consistency and predictability are important.  As is
manual control/override where needed.


similarly, I have ruby1.9.1, ruby2.0, ruby2.1, ruby2.2, and ruby2.3 in
/usr/bin, with /usr/bin/ruby a symlink pointing to ruby2.3

craig

ps: to me, using #!/usr/bin/env is just a variant of something that i've
hated ever since my first unix sysadmin job (actually, before that when
I was just a user or programmer) - important things should not be buried
in a programmer's home directory and dependent on their idiosyncratic
(and undocumented) environment settings. that's fine for your own tools
and hacks and dev/testing versions, but when any such program moves
beyond being a personal tool, it needs to be integrated into the system
so that it works consistently for everyone who uses it.


--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


/usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 12:57:48AM +1100, Andrew McGlashan wrote:
> #!/usr/bin/env bash

please don't promote that obnoxious brain-damage. it's bad enough seeing the
#!/usr/bin/env disease on sites like stackexchange (where at least they have
the excuse of catering to non-linux systems - and even there it's broken,
because env isn't guaranteed to be in /usr/bin on all systems anyway) but bash
will be /bin/bash on every linux system that exists, and always will be.

there are lots of good reasons why abusing /usr/bin/env like this is a bad
idea at:

http://unix.stackexchange.com/questions/29608/why-is-it-better-to-use-usr-bin-env-name-instead-of-path-to-name-as-my

(see also the Linked and Related Q on the RHS of the page)

the one argument in favour of doing this (that the script will be run by the
first matching interpreter found in the PATH) is both a blessing and a curse.
at best it's a minor convenience. at worst, it's a potential security risk -
it's not an accident or an oversight that every unix system since the #! line
was invented DOESN'T search $PATH for the interpreter.


one of the worst problems with doing it is that it breaks the ability to pass
command-line options to the interpreter in the #! line - e.g. '#!/bin/bash -e'
works, but with '#!/usr/bin/env bash -e' the '-e' is ignored by bash.

this is bad enough for bash, but worse for other scripting languages where
passing command-line options to the #! interpreter is routine (like sed,
awk, perl) or required (like make, which requires '-f' on the #! line of an
executable make script).

also, env messes with ARGV[0] which can make the script difficult or
impossible to find with ps


'#!/usr/bin/env interpreter' - brought to you by the people who think
that 'curl http://randomwebsite/path/to/script | sudo bash' is a good
way to install software.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Every 2 Minutes cronjob

2016-12-22 Thread Craig Sanders via luv-main
On Thu, Dec 22, 2016 at 07:13:21PM +1100, David Zuccaro wrote:
> Yes it's a screen capture script.
> 
> Am I setting DISPLAY properly?

As Morrie said, there's a lot more to it than just setting the DISPLAY
variable.

This doesn't seem like a good task for cron.  IMO you'd be better off
with screen recording software - that's precisely the job they're designed for.

I don't use any myself, never needed/wanted to, so can't recommend any
but these web pages list several:

https://community.linuxmint.com/tutorial/view/1229
http://hackerspace.kinja.com/screen-recording-in-linux-1686055808
http://www.tecmint.com/best-linux-screen-recorders-for-desktop-screen-recording/
http://askubuntu.com/questions/4428/how-to-record-my-screen


All these and more were found with a quick google search:

  https://www.google.com.au/search?q=linux+software+to+record+desktop+session

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: /usr/bin/env

2016-12-22 Thread Craig Sanders via luv-main
On Fri, Dec 23, 2016 at 01:37:11AM +1100, Craig Sanders wrote:
> one of the worst problems with doing it is that it breaks the ability
> to pass command-line options to the interpreter in the #! line - e.g.
> '#!/bin/bash -e' works, but with '#!/usr/bin/env bash -e' the '-e' is
> ignored by bash.

that's not quite true. it's not that bash ignores the '-e', it's that
env tries to run a non-existent program called 'bash -e'

e.g.

$ cat foo.bash
#!/usr/bin/env bash -e

echo foo

$ ./foo.bash
/usr/bin/env: ‘bash -e’: No such file or directory


IMO, that's an unmistakable signal that env was not intended to be used
in this way.



IIRC some versions of env (not the one in GNU coreutils, which is
installed on almost every linux system - with some embedded or
busybox/tinybox systems being the exceptions) will still run the correct
interpreter but fail to pass on any options.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Viewing iView and flash

2017-03-26 Thread Craig Sanders via luv-main
On Sat, Mar 25, 2017 at 06:23:05PM +1100, russ...@coker.com.au wrote:

> deb http://www.coker.com.au wheezy misc
> deb http://www.coker.com.au jessie misc
> deb http://www.coker.com.au stretch misc
> 
> The above APT repositories have Python-iview built for Debian, it was built 
> for wheezy but still works.  I probably should update it.

I have a set of shell scripts which make use of python-iview that
you might want to adapt for inclusion in the package. they need some
minor changes to make them suitably generic (they currently make some
assumptions about the index file location, video download location, and
ownership etc that aren't appropriate for packaged software)

-rwxr-xr-x 1 cas cas    41 Mar 31  2015 iview-extract-filename.sh*
-rwxr-xr-x 1 cas cas   586 Jun 18  2016 iview-fetch-index.sh*
-rwxr-xr-x 1 cas cas   176 Mar 29  2015 iview-grep.sh*
-rwxr-xr-x 1 cas cas  1080 Jun 18  2016 iview-list.sh*


iview-extract-filename.sh is a trivial sed wrapper that extracts the
filename of a show from the not-quite-right format used in the index so
that it can be downloaded with iview-cli.

iview-fetch-index.sh fetches the current iview index listing if it
either doesn't exist or if the local copy is too old, with an option to
force a refresh.

iview-grep.sh searches the downloaded index with grep. it calls
iview-fetch-index.sh to fetch/refresh the index as required.

iview-list.sh builds on iview-grep.sh by searching for all episodes
that match the pattern, and then optionally downloading them using
python-iview's iview-cli.



They could all be fairly easily merged into a single script, with a
small number of command-line options to perform the various tasks - all
except iview-extract-filename.sh already use getopts and are really just
variations on a theme anyway, and iview-extract-filename.sh is something
that would work just as well as a function in that single script.

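The refresh-if-stale logic in iview-fetch-index.sh is roughly this (a
from-memory sketch, not the actual script - the cache path, max age, and
the exact iview-cli invocation are placeholders):

    INDEX="$HOME/.cache/iview-index.txt"
    MAX_AGE=3600    # seconds

    # refresh if forced, if missing/empty, or if older than MAX_AGE
    if [ "$force" = "yes" ] || [ ! -s "$INDEX" ] ||
       [ "$(( $(date +%s) - $(stat -c %Y "$INDEX") ))" -gt "$MAX_AGE" ]; then
        iview-cli $index_opts > "$INDEX"   # $index_opts: whatever option dumps the programme index
    fi
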
Doing that has been on my TODO list for some time, but i rarely have
to resort to iview so it's a low priority for me (i just set mythtv to
record everything that sounds even vaguely interesting, and often don't
even get around to watching it anyway so end up deleting them when i get
close to running out of space)

Given what's being done to the ABC lately (i.e. being run down and
crappified so that nobody cares too much when it's sold off to Murdoch),
i doubt i'll even bother with my myth setup for much longer. Commercial
TV isn't worth watching even if you skip or strip out the ads, and what
they're turning the ABC into isn't going to be worth watching either.
SBS has some good stuff, but that's being screwed over by the LNP too.


> > I will also look for a way to let the appropriate people at the ABC
> > know that flash is inappropriate for many reasons, particularly the
> > security vulnerabilities.

> But not using Flash would allow naughty people to use a python program
> to download content! :-#

Using flash doesn't stop anyone from doing that. python-iview
still works and still occasionally gets updated to cope with ABC
obstructionism (by nice people on the net, not by the original author JV
who has been cease-and-desisted out of his own software).

Many linux video players will play flash video files (.flv), without
using or requiring flash - including mplayer and derivatives like
smplayer. and the video in a flash video file is easily converted to a
standard video format, with no loss of quality (i use ffmpeg. handbrake
should work too).

Of course, the resolution on iview sucks compared even to SD, but it's
OK if you missed the first episode or two of a series before telling
MythTV to start recording it.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: simple web proxy

2017-03-22 Thread Craig Sanders via luv-main
On Thu, Mar 23, 2017 at 02:07:33AM +1100, russ...@coker.com.au wrote:
> Can anyone recommend a very simple web proxy that works reliably with apt-get 
> (unlike Squid)?  What I want is to just allow access to apt repositories and 
> nothing else.  Caching isn't really required as the bandwidth available has 
> been steadily increasing rapidly while Debian package size has been 
> increasing 
> very slowly.

have you tried apt-cacher-ng?

if you don't need much cache, just set the cache limit low.

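On the client side it's just the standard apt proxy setting, e.g. (the
hostname is a placeholder, 3142 is apt-cacher-ng's default port):

    # /etc/apt/apt.conf.d/02proxy
    Acquire::http::Proxy "http://apt-cache.example.lan:3142";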

Package: apt-cacher-ng
Version: 2-1
Description-en: caching proxy server for software repositories
 Apt-Cacher NG is a caching proxy for downloading packages from Debian-style
 software repositories (or possibly from other types).
 .
 The main principle is that a central machine hosts the proxy for a local
 network, and clients configure their APT setup to download through it.
 Apt-Cacher NG keeps a copy of all useful data that passes through it, and when
 a similar request is made, the cached copy of the data is delivered without
 being re-downloaded.
 .
 Apt-Cacher NG has been designed from scratch as a replacement for
 apt-cacher, but with a focus on maximizing throughput with low system
 resource requirements. It can also be used as replacement for apt-proxy and
 approx with no need to modify clients' sources.list files.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: DNS resolution weirdness whilst using OpenConnect

2017-03-09 Thread Craig Sanders via luv-main
On Mon, Mar 06, 2017 at 01:23:05PM +1100, Anthony wrote:
> What goes screwy is DNS resolution...
>
> Sometimes, for no obvious reason, I can resolve internal hostnames
> that resolve to destinations reached by the host using things like the
> "host" command...

IMO the best solution is to run your own DNS resolver (e.g. with unbound
or maradns or whatever on your gateway box), manually set resolv.conf
to point to it, purge crapware like resolvconf, and disable resolv.conf
mangling by anything capable of doing it (e.g. dhclient, network
manager, openconnect, etc).

In short, disable anything that auto-magically fucks up your DNS
resolver settings.


e.g. resolv.conf on my resolver host looks like this:

search taz.net.au
nameserver 127.0.0.1

on machines with a static IP, it looks like this:

search taz.net.au
nameserver 203.16.167.1


If you run a DHCP server, configure it to give out your domain name and
your resolver's IP address.  My dhcp server (ISC DHCPD) has these rules:

option domain-name "taz.net.au";
option domain-name-servers 203.16.167.1;

If your machines rely on someone else's DHCP server (e.g. a laptop you
plug into many different networks) you can still run your own resolver.
Just edit your /etc/dhcp/dhclient.conf and remove "domain-name" and
"domain-name-servers" from the "request" line. or use "supercede",
"prepend" etc rules in dhclient.conf to make sure your resolver on
127.0.0.1 is the first or only resolver.  Basic examples are commented
out in the .conf file, or see dhclient.conf's manpage for full details.

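For example, these two lines in dhclient.conf force the local resolver
and your own domain no matter what the DHCP server offers (adjust the
domain to suit, obviously):

    supersede domain-name "taz.net.au";
    supersede domain-name-servers 127.0.0.1;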



Aside from fixing your DNS resolution weirdness, this will also have the
effect of speeding up DNS resolution as you now have a local caching
resolver on your LAN - eliminating 10s or 100s of milliseconds RTT for
DNS lookups.  It's worth doing it for this alone, even if your DNS isn't
being randomly screwed up by competing automagic crap.


Running your own resolver is easy, it's a one time operation, then you
can forget about it - little or no maintenance is required.

Some resolvers allow you to set upstream forwarders (e.g. your ISP's
ns, or google's 8.8.8.8 or whatever).  Some allow you to set specific
upstream forwarders for specific domains - this is especially useful
if you have a VPN to work or somewhere, and need to be able to resolve
hostnames in private domains or sub-domains that are only visible behind
the company firewall.  This kind of configuration flexibility is not
possible unless you run your own resolver.

craig

ps: also recommended, a local squid cache. with ad & script blocking
rules to provide a minimal set of filtering even without browser plugins
like umatrix or ublock origin.  Unfortunately, this is less useful than
it used to be - using https everywhere is a great thing, but it busts
caching.

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: rsync; how to preserve Windows permissions

2017-03-09 Thread Craig Sanders via luv-main
On Fri, Mar 03, 2017 at 02:48:56PM +1100, Bill Yang wrote:

> I now have another issue. Some of the files/directories I transferred
> from source storage server (Red Hat) are Windows files/directories
> (Those Windows files were backed up using a backup software). rsync
> didn’t preserve Windows permissions/ACLs when those files were
> transferred onto the target storage server (FreeBSD). Is there any
> way to instruct rsync to preserve Windows permissions/ACLs? Any
> suggestions would be appreciated.

I have no idea if this will work for windows ACLs or not (or from linux
to freebsd filesystems), but did you try rsync's -A (aka --acls) option?

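e.g. something like this (paths and hostname are placeholders; -X copies
extended attributes as well, which may or may not matter here):

    rsync -aAX --numeric-ids /srv/winbackups/ freebsd-host:/srv/winbackups/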

Note that ACL support is very much dependant on filesystem capabilities, and
different filesystems may have completely different and incompatible ACL 
support.

AFAICT, you have at least two problems:

1. your windows backup software backs up ACLs in some undefined way.  You need
to research how and what it is doing.

Can your windows backup software store the ACLs in a separate file and
restore the correct permissions from that?

2. you are rsyncing from a linux filesystem to freebsd. it may not even
be possible to get this to work, especially considering that the Windows
ACLs are, at best, a translation from real windows ACLs to the best fit
on a linux filesystem, and then (with luck) translated again by the rsync
transfer to the best match on a freebsd filesystem.


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Office suite functionality without an office suite?

2017-07-01 Thread Craig Sanders via luv-main
I somehow missed this message when you first posted it.

here's what I use, or have used in the past.  NOTE: my needs for "Office" type
programs are quite simple and minimalist - easily met by even basic software.
It sounds like your needs are similar.

On Tue, Jun 06, 2017 at 11:13:27PM +, Dede Lamb wrote:

> I could use md + pandoc to produce text documents which takes care of my
> main use case for office.

yep, good idea.  I use markdown + pandoc for almost all documentation and
writing these days.  I always used to write my docs in vim and then do the
markup and formatting in libreoffice or abiword in the past (IMO writing and
markup are two completely separate things) so doing it this way was not a big
change. vim's syntax highlighting works quite nicely with markdown too.

If I need more than what markdown can do i use pandoc to convert to ODF and
finish the document with libreoffice.  (This is usually complicated tables
- really the only thing it can't do that i occasionally need is tables with
multiple headers...not multi-line headers, but multiple levels of headers
- e.g. a top-level with 2 or more columns, each of which has two or more
sub-columns.)

if i was less lazy, or needed to write complex documents more often, I'd make
the effort to learn TeXI can do simple things in it easily enough, but
real mastery of it requires more effort and time than i'm willing to put in.

markdown's got the "80% good-enough" seal of approval :)


> The only other thing I use office for is the occasional spreadsheet
> manipulation (auto-filling and basic math functions, tinkering with sums
> etc).  I'm not sure of a suitable stand-in for this.

Gnumeric is a fairly light-weight spreadsheet - lighter than libreoffice,
anyway.  It's not as capable as localc but it will do all the things you
mentioned and more.

I used to use it whenever I needed to do spreadsheety stuff (which isn't all
that often).  I mostly use libreoffice calc now, largely because I've spent
enough time fixing, rewriting, and improving "macros" (i.e. functions and subs
in LO's version of Basic) in spreadsheets I've downloaded or that people have
sent to me that I know the language reasonably well now.

> Thoughts? No idea too crazy, bonus points if it works in a console, minus
> points for cloud services ;)

I used to use sc occasionally years ago.  No idea if it's still worth using.

Package: sc
Source: sc (7.16-4)
Version: 7.16-4+b2
Installed-Size: 440
Maintainer: Adam Majer 
Architecture: amd64
Depends: libc6 (>= 2.14), libncurses5 (>= 6), libtinfo5 (>= 6)
Description-en: Text-based spreadsheet with VI-like keybindings
 "Spreadsheet Calculator" is a much modified version of the public-
 domain spread sheet sc, which was posted to Usenet several years ago
 by Mark Weiser as vc, originally by James Gosling. It is based on
 rectangular table much like a financial spreadsheet.
 .
 Its keybindings are familiar to users of 'vi', and it has most
 features that a pure spreadsheet would, but lacks things like
 graphing and saving in foreign formats.  It's very stable and quite
 easy to use once you've put a little effort into learning it.
Description-md5: 0925a794779dba23662eeb41fb663c7e
Tag: office::spreadsheet, role::program, scope::application,
 uitoolkit::ncurses, use::editing, works-with::spreadsheet
Section: math
Priority: optional
Filename: pool/main/s/sc/sc_7.16-4+b2_amd64.deb
Size: 211774
MD5sum: 94c7293bbb4ed7858f861d0e1bc3dfe5
SHA256: 1a676b93a1e376f18f8efc30e574c1b65b84be12157289d4105850810f2804e5

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Office suite functionality without an office suite?

2017-07-02 Thread Craig Sanders via luv-main
On Sun, Jul 02, 2017 at 09:47:08AM +, Dede Lamb wrote:
> Yes! This is all good info. In the meantime I've been doing some experiments
> of my own. Python interpreter has replaced trivial math stuff that I used to
> do in spreadsheets.

For "trivial" math stuff, there's also bc (or dc if you want RPN) - but both
are capable of a lot more than just trivial arithmetic, although i tend to use
awk or perl for complex stuff (i'd rather spend time improving my skill with
generic languages than math-only languages).  You may also be interested in
GNU octave.

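e.g. a quick bc one-liner ('-l' loads the math library, where a() is arctan):

    $ echo 'scale=10; 4*a(1)' | bc -l
    3.1415926532
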
I use bc a lot, as well as speedcrunch if i need a semi-programmable GUI
calculator with good history (of both input and results). I find simple calc
apps like gcalculator to be annoyingly difficult to use **because** they work
like a basic hand-held calculator so you can't see (and edit) the entire
calculation you're entering before you execute it.


> When I need to produce documents I write them in markdown, convert them to
> html using pandoc and then convert that to pdf using wkhtmltopdf. I style
> the html document using css which is the easiest way i know of to apply
> styles, very little code is required to get something fairly snappy looking.

You can do that in one step if you want. pandoc knows how to use wkhtmltopdf -
it's one of the two options for generating PDF (the other is via latex).

e.g. from a Makefile in one of my document directories:

$(BOOKNAME).pdf: $(TITLE) $(CHAPTERS) $(CSS) Makefile
	pandoc -r markdown -w pdf -t html5 --css $(CSS) -o $@ $(TITLE) $(CHAPTERS)

The '-w pdf' combined with '-t html5' tells pandoc to generate a PDF by first
generating html5 and then processing it with wkhtmltopdf.

The same Makefile also has similar rules for producing epub and html output.
Multiple output formats from the same source file(s).  The epub and html rules
use slightly different CSS files.

wkhtmltopdf has some oddities and bugs but is mostly good enough for most
things. getting page breaks exactly where you want them can be a real PITA.
OTOH, I do know HTML & CSS fairly well, so it's easier for me to work with
than latex.


BTW, one of the things I **really** like about using plain text formats
for writing is that it works perfectly with git, so I get to use the exact
same revision control tools for documentation and other writing as I do for
programming. ".md" and ".css" files are source code, just like ".c" or ".pl"
or whatever.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Office suite functionality without an office suite?

2017-07-01 Thread Craig Sanders via luv-main
On Sun, Jul 02, 2017 at 01:50:58PM +1000, Andrew Pam wrote:
> On 02/07/17 13:21, Craig Sanders via luv-main wrote:
> > if i was less lazy, or needed to write complex documents more often,
> > I'd make the effort to learn TeXI can do simple things in it easily
> > enough, but real mastery of it requires more effort and time than i'm
> > willing to put in.
>
> I recommend LyX as an excellent graphical front-end for TeX, and it fully
> supports embedding raw TeX wherever you want to.

LyX is great for people who like graphical front-ends. I don't, I can't stand
them...may as well use a GUI word processor.

IMO vi (any vi, preferably vim) is the best tool for editing any kind of text,
with markup or without.



My only use for tools like LyX is to do things that require more knowledge of
TeX than I have... and even then just to get a chunk of sample TeX code that I
can modify/re-use in vim.  Using the evil power of cargo-culting as a learning
tool :)

I'm very uncomfortable with using tools that magically do things for me that
I don't understand well enough to do myself.  I see that as a problem to be
fixed, ASAP.

Once I know how to do something in a markup language like markdown or TeX,
if I need to do it repeatedly I'm inclined to write scripts (usually just
awk or perl or sed one-liners) to generate the code - e.g. convert some tab-
or comma- separated data into a table.  For really simple things, vi/vim's
capability for defining editing macros and even ':map' are enough - especially
when I need to edit large text files, performing the same operations in lots
of places but need to exercise human judgement (e.g. where a global search and
replace will do at least as much harm as good).

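e.g. a quick-and-dirty awk one-liner that turns tab-separated data into a
markdown pipe table (just an illustration - the filename is whatever your
data is in, and column alignment is left as an exercise):

    awk -F'\t' 'BEGIN{OFS=" | "} {$1=$1; print "| " $0 " |"}
                NR==1 {sep="|"; for (i=1;i<=NF;i++) sep=sep" --- |"; print sep}' data.tsv
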
The end result is that I have very patchwork knowledge of TeX - most simple
things and some complex things that i needed to do at least once before, but
with very large knowledge gaps.  Markdown is much simpler, so I have nearly
complete knowledge of that, and it's adequate for most of what I need to do.

craig

--
craig sanders <c...@taz.net.au>
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: loudness of mp4 etc files

2017-06-28 Thread Craig Sanders via luv-main
On Wed, Jun 28, 2017 at 08:51:08AM +1000, zlin...@virginbroadband.com.au wrote:
> For any music I download, I convert it to an ogg file with Audacity
> but first use Effect>Amplify to normalise the gain, in order to get a
> reasonably close range of loudness. I use Audacity rather than one of
> the converter programs particularly for its Amplify command.

See also python-rgain:

Package: python-rgain
Source: rgain
Version: 1.3.4-1
Installed-Size: 93
Maintainer: Debian Python Modules Team 

Architecture: all
Depends: python-gi, gir1.2-gstreamer-1.0, python-mutagen, 
gstreamer1.0-plugins-base, gstreamer1.0-plugins-good, python (>= 2.7), python 
(<< 2.8), python:any (>= 2.6.6-7~)
Recommends: gstreamer1.0-plugins-ugly | gstreamer1.0-libav
Description-en: Replay Gain volume normalization Python tools
 This package provides a Python package to calculate the Replay Gain values of
 audio files and normalize the volume of those files according to the values.
 Two basic scripts exploiting these capabilities are shipped as well.
 .
 Replay Gain is a proposed standard designed to solve the very problem of
 varying volumes across audio files. Its specifications are available at
 http://replaygain.org/ .
Description-md5: 48f1f68a3520e4a1beab32f395c121b8
Homepage: https://bitbucket.org/fk/rgain/
Section: python
Priority: optional
Filename: pool/main/r/rgain/python-rgain_1.3.4-1_all.deb
Size: 25982
MD5sum: 86fea888aca88affb359b6564477cc4d
SHA256: 1cd58c4525d0b70a6c358c655f8e82573a30a2fbbce2d7070028f8b26d4decca


I haven't used this, so don't know how well it works.  The description sounds
useful and relevant.

It contains two scripts:

1. replaygain - read, calculate, and write Replay Gain for 1 or more
   files. works for several formats: ogg, flac, wavpack, mp4, mp3

2. collectiongain - does the same for an entire collection of music,
   just give it a path rather than filenames.



Also, here's a list of packages in debian that mention replay gain
somewhere in the package info:


$ apt-cache search replay.?gain  | awk -F' - ' '{printf "%-25s %s\n", $1, $2}'
bs1770gain                measure and adjust audio and video sound loudness
crip                      terminal-based ripper/encoder/tagger tool
groovebasin               music player server with a web-based user interface
libebur128-1              implementation of the EBU R128 loudness standard
libebur128-dev            implementation of the EBU R128 loudness standard (development files)
libmpcdec-dev             MusePack decoder
libreplaygain-dev         Calculate ReplayGain information
libreplaygain1            Calculate ReplayGain information
rhythmbox-plugins         plugins for rhythmbox music player
soundkonverter            audio converter frontend for KDE
vorbisgain                add Replay Gain volume tags to Ogg Vorbis files
bluemindo                 ergonomic and modern music player designed for audiophiles
aacgain                   Lossless mp4 normalizer with statistical analysis
libgrooveloudness4        loudness scanner for libgroove
libgrooveloudness-dev     loudness scanner sink for libgroove (development files)
qmmp                      Feature-rich audio player with support of many formats
python-rgain              Replay Gain volume normalization Python tools






PS: as is fairly common with python (and gem and ruby and node.js etc)
programs, the web page for rgain gives instructions to first install the
dependencies with apt-get (unneccessary but not actively harmful), and then
cretinously tells you to run an installer script for rgain as root.

Don't do that, it's extremely bad advice from programmers who don't care about
the operating systems their code is running on, who see the OS as an obstacle
to be worked around and trashed rather something to use and work with.  Even
without any malicious surprises in the installer script, that kind of advice
breaks systems.

Worse, it's an incredibly bad habit to get into especially for people who
actually need instructions like that (they're exactly the people who should
NEVER be encouraged to follow unsafe, insecure instructions).  Ditto for
instructions that do things like tell you to pipe the output of wget or curl into
a root sh or bash or whatever.

Instead, just run 'apt-get install python-rgain'.  All the dependencies will
be resolved automatically by apt-get.  That's its job.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Auto-remove

2017-08-22 Thread Craig Sanders via luv-main
On Tue, Aug 22, 2017 at 02:49:34PM +1000, russ...@coker.com.au wrote:
> On Tuesday, 22 August 2017 4:06:37 AM AEST stripes theotoky via luv-main
> wrote:
>
> > I use aptitude as a package manager.  I'm running out of disk space.
>
> How much disk space is in use and how much do you have?  Hard drives keep
> getting bigger, nowadays it's hard to give away disks smaller than 500G.  A
> large Debian installation is around 6G.

It could be apt-get's download cache taking up a lot of disk space.  It
doesn't clear out downloaded files unless you tell it to.  AFAICT from the man
page, the same is true for aptitude - not surprising, they both use the same
download cache dir to download .deb files to.

try 'du -sch /var/cache/apt/archives'

and if there's a lot of files in there, run 'apt-get clean'

'aptitude clean' will also work.

> Well if you remove all kernels you are probably going to have a problem.
> But if you remove all but the most recent then it will probably be ok.
> Which is it doing?

it's safe to remove all linux-image-* and linux-header-* packages except
for the currently running kernel, which may or may not be the latest kernel
package installed (depending on whether you've rebooted or not since upgrading
it).

apt-get (actually, dpkg IIRC) will warn you if you try to uninstall the
currently running kernel. If your running kernel was auto-installed due
to a dependency, mark it as manually installed with 'apt-mark manual
linux-image-VERSION', so that it doesn't get removed if you run 'apt-get
autoremove'

> I'm running Debian/Unstable on my laptop and due to some issues of
> dependencies etc "apt-get autoremove" wants to remove many KDE packages
> right now which isn't what I want.  Also due to conflicts it wants to remove
> them if I run "apt-get dist-upgrade".  This sort of thing sometimes happens
> in Unstable when libraries are being updated, so I just have to not upgrade
> my laptop until all the necessary packages are rebuilt to depend on new
> libraries.  It's the sort of thing that happens when you run Unstable.

apt-get upgrade is useful in that situation - it only upgrades packages
that WON'T require another package to be removed.

marking packages as held is also useful.  I used to use my own script
'dpkg-hold' for this but 'apt-mark' (which didn't exist when i wrote
dpkg-hold) works better.

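e.g. (the package name is just an example):

    apt-mark hold libkf5akonadi-dev     # don't upgrade or remove it
    apt-mark showhold                   # list currently held packages
    apt-mark unhold libkf5akonadi-dev   # release the hold again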


craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: command line XMPP presence monitoring

2017-05-24 Thread Craig Sanders via luv-main
On Tue, May 23, 2017 at 10:09:05PM +1000, Russell Coker wrote:
> I am using sendxmpp to send notifications of system errors.  That requires
> the XMPP client keep running.  Xabber on Android sometimes stops for no
> apparent reason (I have it configured to always have a notification so it
> should never stop).
>
> Is there a command-line program to check presence on XMPP so that I could
> have it notify me by some other method if it detects that I haven't been on
> for a while?

Dunno about an existing command line tool but it should be pretty simple
to write one in perl or python or something.

for python in debian, there is:

python-jabberbot - easily write simple Jabber bots
python-nbxmpp - Non blocking Jabber/XMPP Python library
python-xmpp - Python library for communication with XMPP (Jabber) servers
python-pyxmpp - XMPP and Jabber implementation for Python
python-sleekxmpp - Python XMPP (Jabber) Library Implementing Everything as a 
Plugin
python3-slixmpp - Threadless, event-based XMPP Python 3 library
python3-slixmpp-lib - Threadless, event-based XMPP Python 3 library (optional 
binary module)

slixmpp seems to be python3 only. the others have python & python3
versions.

And for perl, there is:

libanyevent-xmpp-perl - implementation of the XMPP Protocol
libnet-xmpp-perl - XMPP Perl library

BTW, sendxmpp is a perl script that uses libnet-xmpp-perl (aka
Net::XMPP) so you already have that installed.  It shouldn't be too hard
to hack up a modified version of sendxmpp that checks if the recipient
account has been logged in recently before sending the message. maybe
make it a generic subroutine (e.g. recipient(s), max seconds since last
seen, and message as args) and submit it as a patch upstream.  Looking
at the man pages in the package, Net::XMPP::Presence will probably have
what you need.


or you could do it the crude/slack/easy way and just send an email every
time as well as attempting a jabber message. and just in case email is
down, use curl or lynx or something to fetch a specific URL (CGI script)
from some other server you control.

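something along these lines would do it (addresses and URL are
placeholders, and the curl target is whatever CGI script you set up on
the other box):

    msg="$(hostname): $1"
    echo "$msg" | sendxmpp recipient@example.org
    echo "$msg" | mail -s 'system alert' recipient@example.org
    curl -fsS 'https://other-host.example.org/cgi-bin/alert' >/dev/null
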
craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: IDE & Tasks Lists ~ Cross Platform

2017-05-24 Thread Craig Sanders via luv-main
I just saw Erik's reply to this, hadn't noticed your original post until
now.

On Fri, May 19, 2017 at 07:22:20AM +1000, Piers Rowan wrote:

> We have grown quite a bit and having each dev running their pet dev
> environment seems eclectic and difficult to manage (aka manage down
> when you need to help a colleague and it take you 5 minutes to work
> out how their IDE / screen is working).

If you want an absolutely certain way to totally destroy a developer's
productivity, force them to use your (or someone else's) preferred IDE.
or worse, an IDE chosen by committee/consensus.

As long as they're committing their changes regularly to the version
control system, and following the required coding style, let each of
them use what they want. Otherwise you're forcing them to throw away
years of experience with their tools.  Also, they're almost certainly
going to keep using their own preferred tools at home, so they'll suffer
a jarring context switch every day when they come in to work.

Nominate a preferred or supported IDE if you want but don't force
everyone to use it.



If you're using git for version control (and why wouldn't you?),
gitlab is a pretty good tool for running your own github-like source
repository, issue/task tracker, developer wiki, built-in CI tools (e.g.
you can configure it to automatically try to compile the software on
every commit and report the outcome to the developer), and more.

https://about.gitlab.com/

Packaged for debian and presumably other distros. and trivially easy
to get a test site up and running if you have a docker host somewhere
(docker image at https://hub.docker.com/r/gitlab/gitlab-ce/).

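e.g. something along these lines for a throwaway test instance (the
container name and port mappings are just an example):

    docker run -d --name gitlab-test \
        -p 8080:80 -p 8022:22 \
        gitlab/gitlab-ce:latest
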
gitlab has a "community edition" and an "enterprise edition".
AFAICT, there isn't much difference between the two, mostly paid
support and some niceties useful mainly for very large sites:
https://about.gitlab.com/comparison/gitlab-ce-vs-gitlab-ee.html



I run gitlab on my own server to manage my own software projects (many
of which are also on github), as well as for documentation and other
writing projects larger than a half-page readme (i'm a huge fan of vim
+ markdown + pandoc + a Makefile + git for a great writing workflow:
markdown is the "source code", and html, .odt, .epub and .pdf are
the pandoc-"compiled" output formats).  I use only a small subset of
gitlab's features, but it works well for me.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Resizing jpegs, limiting file size

2017-05-22 Thread Craig Sanders via luv-main
On Sun, May 14, 2017 at 07:02:41PM +1000, Tony White wrote:

> #!/bin/sh
> # run this is script in the image folder
> # change the value 320 to whatever width you want
> response = 320
> for f in *; do
> # prefix the results with sm_ or change to what you want
> convert $f -resize $response +profile "*" sm_$f
> done

You really should quote your variables.

If any of the filenames have spaces (or other problematic characters
like *, ?, [, (, and other characters with special meaning to the
shell) in them, this script will break.

Those are all valid characters in a filename on all linux
filesystems. There are only two characters which are not valid in a
filename, a forward-slash and a NUL byte.  Anything else is permitted.

To avoid any problems cause by such annoying characters, you should
surround variables names with double-quotes when you use them, for the
same reason you quoted the "*". e.g.

convert "$f" -resize "$response" +profile "*" "sm_$f"

There are other reasons to quote variables, especially when you can not
have complete knowledge or control over the value of a variable.

As a general rule, **ALWAYS** double-quote variables when you use them.
It can pretty nearly never hurt to double-quote a variable, but there
are plenty of situations using an unquoted variable can cause severe
damage.

(NOTE: there are some very specific rare situations where you don't want
to and shouldn't quote variables, but if you're capable of writing
the kind of shell code that requires that, you'll know when not to
quote. Otherwise, just quote variables all the time: 99 times out
of 100 (or more) it is exactly the right, safe, and correct thing
to do. Or, to put it another way, unless you know exactly WHY you don't
want to quote a specific variable, then quote it)

BTW, this script has clearly not been tested - typed in from memory?
There should be no space between the variable name, the equals sign, or
the value being assigned. 'response=320' would assign the value 320 to
the variable $response. 'response = 320' does not, it is an attempt to
run a command called 'response' with two command line arguments '=' and
'320'.

Also BTW, use double-quotes when you need to interpolate variables or
sub-shell results etc into a string, and it's best to use single-quotes
otherwise, like so:

convert "$f" -resize "$response" +profile '*' "sm_$f"

That's the key difference between double and single-quotes - single-quotes
are for static, fixed, literal strings.  Double-quotes are for variables
and other dynamically generated output.

Finally, when passing filename arguments to programs that understand the
GNU convention of '--' to indicate the end of program options, it's also
always a good idea to use it to prevent filenames beginning with a '-'
from being interpreted as options. e.g. if a directory contains a file
called '-rf' in it, there is a huge difference between running 'rm *'
and 'rm -- *'

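Putting the quoting advice together, a safer version of that loop would
look something like this (same behaviour otherwise; note I haven't checked
whether convert itself understands '--', so filenames starting with '-'
are still a risk - the [ -e ] test just guards against the glob matching
nothing):

    #!/bin/sh
    width=320
    for f in *; do
        # if the glob matched nothing, "$f" is the literal '*' - skip it
        [ -e "$f" ] || continue
        convert "$f" -resize "$width" +profile '*' "sm_$f"
    done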

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: IDE & Tasks Lists ~ Cross Platform

2017-05-26 Thread Craig Sanders via luv-main
On Thu, May 25, 2017 at 02:59:47PM +1000, luv-main@luv.asn.au wrote:
> > and following the required coding style,
>
> As for "required", I let the team drive the coding style requirement, as
> my only needs were consistency and readability. Since the team had set
> the style standard (mostly pilfered clauses), it was self-enforcing. Any
> cowboy was soon lassoed by annoyed colleagues.

by style requirements, i was mostly referring to formatting like spaces,
tabs, end-of-line markers etc that you mentioned. also things like where
braces belong - stupidly wasting a whole screen line on a brace or
putting it on the end of the line starting the block (e.g. subroutine
definition) where it belongs :)

and sometimes other stuff like self-documenting code with common
templates for functions etc (input, output, notes including algorithm
summary, known bugs and limitations, etc)


> > built-in CI tools (e.g. you can configure it to automatically try to
> > compile the software on every commit and report the outcome to the
> > developer), and more.
>
> Maybe things are different in the embedded world, but I can't remember
> a developer submitting code which didn't build - that would be a
> professional embarrassment never lived down. And the makefile could
> readily support a "make commit", to automate a pre-commit build. I
> have to admit to relying on healthy paranoia to ensure I checked that
> it built, before the commit.

i've seen lots of developers write and submit code that compiles
perfectly on their machine in their heavily customised (idiosyncratic
mess) environment that fails to build anywhere else. automated tools
to build and run a test suite on every checkin is amazingly useful for
catching such problems early.

craig

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main


Re: Reworking filenames

2017-05-04 Thread Craig Sanders via luv-main
On Thu, May 04, 2017 at 02:42:00PM +1000, Andrew McGlashan wrote:
> That's okay, so long as there is at least one target file, otherwise it
> fails.
>
> I've added a test before now and dropped the ls in the for.

or you could just set nullglob in the script.

From the bash man page:

nullglob   If set, bash allows patterns which match no files (see
   Pathname Expansion above) to expand to a null string,
   rather than themselves.

e.g.

$ for i in *.doesnotexist ; do echo $i ; done
*.doesnotexist

$ shopt -s nullglob

$ for i in *.doesnotexist ; do echo $i ; done
$

Note that this only affects interpretation of glob patterns, not fixed
strings.  so 'for i in 1 2 3 4 5; ...' still works as expected because there
aren't any glob/wildcard characters in it.


you can put the nullglob setting and the for loop in a different scope (e.g. a
sub-shell or a function) if you need to avoid nullglob having unwanted
side-effects on other parts of the script.

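e.g. confining it to a subshell so nothing outside is affected:

    $ ( shopt -s nullglob ; for i in *.doesnotexist ; do echo "$i" ; done )
    $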

but, really, why write a 58 line shell script when a one-liner rename command
(or 2 or 3 lines when reformatted for readability) is all that's needed?

or one of the many similar utilities to do this extremely common task (most of
which are both harder to use and less functional than the perl rename tool).

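The original script isn't quoted here, but as a generic illustration of
the perl rename tool, something like:

    rename -n 's/ /_/g' ./*

replaces spaces with underscores in every filename in the current
directory ('-n' just shows what would be done; drop it to actually
rename).
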
craig

ps: the previously mentioned Bash Pitfalls and the Bash FAQ at the same site
are, IMO, **ESSENTIAL** reading for anyone wanting or needing to write bash
scripts. also useful for other sh dialects.

http://mywiki.wooledge.org/BashPitfalls

http://mywiki.wooledge.org/BashFAQ

I also highly recommend the Unix & Linux Stack Exchange site for anything to
do with shell, awk, sed, etc scripting.  Searching there will almost certainly
find good solutions for whatever you're trying to do and if not, you can
always ask your own question and get a good answer in short order.

https://unix.stackexchange.com/ 

--
craig sanders 
___
luv-main mailing list
luv-main@luv.asn.au
https://lists.luv.asn.au/cgi-bin/mailman/listinfo/luv-main

