Re: [gentoo-user] app-misc/ca-certificates

2021-06-01 Thread Grant Taylor

On 5/31/21 11:15 PM, William Kenworthy wrote:

And another "wondering" - all the warnings about trusting self signed
certs seem a bit self serving.


No, it's not self serving.

Considerably more people than just the public certificate authorities 
bemoan self-signed certificates.


Consider this:

1)  Your web site uses a self signed certificate and you have trained 
users to blindly accept and trust the certificate presented to them.
2)  Someone decides to intercept the traffic and presents a different 
self signed certificate to the end users while proxying the traffic on 
to you.
3)  Your end users have no viable way to differentiate between your self 
signed certificate and the intercepting self signed certificate.


Without someone - whom you trust - vouching for the identity of the 
party that you're connecting to, you have no way to know that you are 
actually connecting to the party that you are intending to connect to.


Yes, they are trying to certify who you are, but at the expense of 
probably allowing access to your communications by "authorised parties"


Nope.  Not at all.  (Presuming that it's done properly.  More below.)

The /only/ thing that the certificate does / provides is someone - whom 
end users supposedly trust - vouching that you are who you say you are. 
 The CA is nowhere in the actual communications path.  Thus they can't 
see the traffic even if they want to.


The proper way to configure certificates is:

1)  Create a key on the local server.
2)  Create a Certificate Signing Request (a.k.a. CSR) which references, 
but does not include, the key.
3)  Ask a CA to sign the CSR.
4)  Use the certificate from the CA.

The important thing is that the key, which is integral to the 
encryption, *NEVER* *LEAVES* *YOUR* *CONTROL*!
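
A minimal sketch of that flow with openssl (the file names and the CN 
are placeholders):

   # 1) Generate the key locally; it never leaves this machine.
   openssl genrsa -out example.com.key 2048
   # 2) Create a CSR that references, but does not include, the key.
   openssl req -new -key example.com.key -subj "/CN=example.com" \
      -out example.com.csr
   # 3) Send example.com.csr -- not the key -- to the CA to sign.
   # 4) Install the certificate the CA returns alongside the local key.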


Thus there is no way that a CA is even capable of getting in the middle 
of the end-to-end communications between you and your client.


There have been some CAs in the past that would try to do everything on 
their server.  But in doing so, they violate the security model.  Don't 
use those CAs.


*YOU* /must/ generate the key /locally/.  Anything else is broken security.

(such as commercial entities purchasing access for MITM access - 
e.g. certain router/firewall companies doing deep inspection of 
SSL via resigning or owning both end points).


This is actually exceedingly difficult to do, at least insofar as 
decrypting and re-encrypting the traffic.  Certificate Transparency logs 
help ensure that a CA doesn't ... inadvertently ... issue a certificate 
that they should not.  Or at least it makes it orders of magnitude 
easier to identify and detect when such ... mistakes happen.


There is also the Certificate Authority Authorization record that you 
can put in DNS that authorizes which CA(s) can issue certificates for a 
domain.  A few years ago we passed the deadline where all CAs had to 
adhere to the CAA record.  As in the CA/Browser Forum has non-renewed 
anybody who wasn't adhering to CAA.  This is water so far under the 
bridge that it's over the waterfall, out to the ocean, evaporated, and 
is raining down again.
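
For example, a CAA record in a zone file looks something like this (the 
domain and CA are placeholders):

   example.com.  IN  CAA  0 issue "letsencrypt.org"

Any CA other than the one(s) listed is supposed to refuse to issue 
certificates for that domain.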


Also, DNSSEC protects DNS in that it makes it possible to authenticate 
the information you receive.  Thus you can detect when things aren't 
authenticated and you know they should be.
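
E.g. you can request the DNSSEC data explicitly (the domain is a 
placeholder):

   $ dig +dnssec example.com A

A validating resolver sets the "ad" (authenticated data) flag in its 
response when the signatures check out.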


If it's only your own communications and not with a third, commercial 
party, self signed seems a lot more secure.


Nope.  3rd parties don't have access to the encrypted communications. 
The only thing they have access to is saying if you are you or not. 
Yes, that's Bob over there in the corner.  But I have no idea what he's 
talking about b/c MATH.


Note the words "signed" and "signing".  A Certificate Authority signs a 
certificate signing request, thus vouching for the identity of the 
entity submitting the CSR.  You obviously can sign your own CSR.  That's 
where a self-signed certificate comes from.  But you have nobody vouching 
for who the far entity is, much less who vouched for them.


Speaking of who vouched for them, how do we trust them?  That's 
where the hashes in /etc/ssl (or wherever it is) come into play.  Your 
system has a public key for /trusted/ root CAs.  Thus when your system 
sees a certificate signed by a CA, it computes the hash and looks for 
the public key as the hash-named file on your local system.  If the file 
exists and all the math passes, then the root certificate is trusted. 
If the root certificate is trusted, then your system will trust the 
certificate that the CA is vouching for.
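
You can watch that lookup happen by hand with something like the 
following sketch (the path is a placeholder):

   $ openssl x509 -noout -hash -in /path/to/cert   # prints the hash
   $ openssl verify -CApath /etc/ssl/certs /path/to/cert

The verify command finds the issuer via the hash-named links in 
/etc/ssl/certs and reports OK if all the math passes.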


This is all ... something ... having to do with who is vouching for whom 
and do you trust the vouching party or not.


But at no time does a CA have access to the encrypted communications, 
as long as things were done properly in that the keys were generated 
locally.




--
Grant. . . .
unix || die



Re: [gentoo-user] app-misc/ca-certificates

2021-06-01 Thread Grant Taylor

On 5/29/21 12:26 AM, Walter Dnes wrote:
Looking through them is "interesting".  There seem to be a lot of 
/etc/ssl/certs/????????.0 files, where "?" is either a random number 
or a lower case letter.


They aren't random at all.  They are a fingerprint (hash) of signing (?) 
certificates.  The fingerprint is generated in a deterministic manner.


The sym-links (or hard links) are a convenient way to associate a hash 
back to the cert file that it's representing.


root@host#  ln -s /path/to/cert \
   /etc/ssl/certs/$(openssl x509 -noout -hash -in /path/to/cert).0


The hash is what validating programs use.  They have no good way to 
determine what the file name would be.  So they compute the hash and 
look it up.


You could name all the files with hashes.  But that would make it quite 
annoying ~> difficult, impractical, bordering on impossible for a human 
to maintain.  So, instead, the trusted root certificates are stored by a 
human friendly name and the hashes point to the file via a sym-link.
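
Aside:  modern OpenSSL can (re)build all of those hash links in one go; 
a sketch, assuming OpenSSL 1.1 or newer (older installs shipped an 
equivalent c_rehash script):

   root@host#  openssl rehash /etc/ssl/certs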


These all seem to be symlinks to /etc/ssl/certs/*.pem. 


Quite likely.

Each of those files is in turn a symlink 
to /usr/share/ca-certificates/mozilla/*.crt.


Maybe / probably.  Definitely for root certificates that are part of the 
Mozilla Security Suite.  But it's definitely possible to have other root 
certificates through the same system.  E.g. you run your own private / 
enterprise CA.



Any other suspicious regimes in there?


I'm confident that it depends on where you are in the world.

Let's keep things apolitical and purely technical.



--
Grant. . . .
unix || die



Re: [gentoo-user] Qustions re Dell M.2 PCIe NVMe Solid State Drives under Gentoo

2021-05-27 Thread Grant Taylor

On 5/27/21 4:47 PM, Walter Dnes wrote:
Showing my age... I started using linux on a spare machine with 16 
***MEGA***bytes of ram approx year 1999 or 2000, and the ram was 
perfectly sufficient.


Yep.  I did similar.

Though I think /what/ is done *and* /how/ it is done are significantly 
different now than they were ~20 years ago.


Thunderbird and Firefox are my big RAM consumers that are open all the 
time.  Virtual Machines use more (and are one of the reasons I'm 
building a new system) memory, but they aren't open all the time.


Ignoring the SSL / TLS connection issues, I don't know if a distro from 
~20 years ago could do what I want to do today.  I know that my email 
workload is multiple orders of magnitude larger today than back then. 
/If/ Netscape Communicator /can/ handle my email (via an SSL / TLS proxy 
like stunnel), then we have some big problems.  --  /me grumbles as he's 
now probably going to end up trying things.


Aside:  I wonder if I can get around the SSL/TLS issue by leveraging 
IPsec to protect credentials between the external IPs and configure 
Netscape Communicator for stock IMAP & SMTP.




--
Grant. . . .
unix || die



Re: [gentoo-user] Qustions re Dell M.2 PCIe NVMe Solid State Drives under Gentoo

2021-05-27 Thread Grant Taylor

On 5/27/21 3:05 PM, Walter Dnes wrote:
All current XPS models seem to have 256G or 512G M.2 PCIe NVMe Solid 
State drives in the base configuration.  Questions...


* do NVMe drives function well under Gentoo (driver issues, etc)?


I've not had any problems with them.  They do show up as a different device:

   /dev/nvme0n1p#

Where # is the partition number.  I think /dev/nvme0 might be the first 
NVMe controller, and n1 its first namespace, as the only NVMe (card?) 
that I have is /dev/nvme0n1.
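
Purely illustrative (the sizes and layout are made up), lsblk shows the 
naming hierarchy nicely:

   $ lsblk /dev/nvme0n1
   NAME        MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
   nvme0n1     259:0    0  256G  0 disk
   ├─nvme0n1p1 259:1    0  512M  0 part /boot
   └─nvme0n1p2 259:2    0  255G  0 part /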



* how long do they hold up (wear and tear)?


I've been using the inexpensive ~> cheap one that I have for more than 
18 months.  I'm using a partition on said NVMe as a cache for my ZFS 
pool.  I've not yet seen any symptoms of problems.


I will say that you want to make sure the system has PCIe that's 3.0 or 
better to take advantage of the full speed.  --  My existing NVMe is in 
a PCIe 16x slot to NVMe adapter card.  It only uses 4x lanes, but the 
others are physically occupied holding the card.


I'm building a new system to replace the 5+ year old XPS w/ PCIe 2.0 
that has the card and that I'm replying from.  The newer system is 
quite similar, save for PCIe 3.0.  The speed difference between the 
NVMe in the two systems is insane with the faster PCIe bus.



* can I simply disable them if I run into problems?


That depends.

If they are used for part of the operating system, as in your boot / 
root drive, then simply disabling them will be ... problematic.  If 
however, you are using it as a non-essential drive, or not using it at 
all, then sure, you can disable it.




--
Grant. . . .
unix || die



Re: [gentoo-user] [OT] tar exclude syntax tip

2021-05-05 Thread Grant Taylor

On 5/5/21 7:33 AM, Walter Dnes wrote:
3) All directories and/or files to exclude must be listed as relative 
paths to the directory being tarred, i.e. last parameter on the 
command line.


This might not be very clearly articulated in the manual et al., but 
once you are aware of it, you see evidence of it in multiple places.


I'm used to tar stripping the leading forward slash from paths.  Thus 
tar actually works with relative paths based on the (effective) working 
directory (which can be changed via the '-C' / '--directory=' option).
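
E.g. a sketch (the paths are placeholders):

   $ tar -cf /tmp/backup.tar -C /home/user --exclude='./cache' .

Here the exclude pattern is relative to /home/user, the directory that 
tar was pointed at via -C, not to the shell's working directory.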


So, after many years and even more battle scars, /I/ would /expect/ that 
the path must be relative to the directory that tar is working on. 
Perhaps I've simply survived more duels with tar.  This is nothing 
against you or anyone.  After all, xkcd has this to say about tar:  I'M 
SO SORRY.


Link - TAR
 - https://xkcd.com/1168/



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: File transfer via USB?

2021-04-25 Thread Grant Taylor

On 4/25/21 4:08 PM, David M. Fellows wrote:

A quick Duckduckgo search for "linux journal" grant edwards yields

https://www.linuxjournal.com/article/2880


Thank you for the link Dave.

I'll read that later tonight.


Still available. Reading it takes me back...


:-)



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: File transfer via USB?

2021-04-25 Thread Grant Taylor

On 4/25/21 12:14 PM, Grant Edwards wrote:
Nope. Many years ago I used UUCP a number of times for "production" 
projects involving data gathering from remote systems via dial-up.


:-)

25+ years ago, I wrote an article about one of those projects for 
Linux Journal.


Can you narrow that down any more?  I'd like to go find a copy of it and 
read it.



UUCP would work a treat for the problem I initially posed.


:-)

Back in the day, I always used Taylor-UUCP, and I'm pretty sure I could 
get it working again on Linux, but I have little confidence I could 
get it working on Win10.


I'm quite confident that Taylor-UUCP can be made to work on contemporary 
Linux and macOS (Big Sur and its predecessors) as I've got exactly that 
working.  What's more, I've got it working across SSH as a transport in 
lieu of serial connections.



I'd probably have better luck with Kermit.


Please elaborate.

I know very little about Kermit.  I've looked at it a number of times 
over the last 20 years, but it's never presented a feature that I 
couldn't do with other things.  I don't know if I'm just too late to the 
Kermit game to properly appreciate it or if I'm completely missing 
something.


My fear is that Kermit would be more manual for transferring files one 
at a time.  Maybe I'm wrong.


Hence, please elaborate about Kermit.  ;-)

It turns out the initial requirements that I was given were wrong, and 
the Windows machine does have some limited Internet access via a VPN 
and proxy, and I can get files to/from the Windows machine that way.


~chuckle~

That happens.


So my initial question is moot.


;-)



--
Grant. . . .
unix || die



Re: [gentoo-user] File transfer via USB?

2021-04-25 Thread Grant Taylor

On 4/25/21 11:39 AM, k...@aspodata.se wrote:

I doubt that many are fluent in cu and uucp,


I think that lack of knowledge / ignorance about something is (or 
can be) a relatively easy problem to solve.


As in there is (or was) no knowledge about something and there will be 
(or is) knowledge at some point in the future.



are they even available on the ms-win side.


I used Taylor-UUCP (no known relation) on Windows XP a number of years 
ago via Cygwin.  I suspect that Windows Subsystem for Linux could be 
pressed into service for similar today.



Who knows, this might work  https://www.uupc.net/


Probably.

You can always build yourself two pairs of RS232 to RS422 converters 
and see where the limit on your serial ports are.


What would the RS232 to RS422 get you?  Why not just use RS232 between 
the computers?  What does RS422 provide to make the extra conversions 
worth while?


You can even try with other deserializers / serializers.  What speed do 
you want?


Probably.  I would be afraid that they would have a lower RoI, 
particularly on the effort front.




--
Grant. . . .
unix || die



Re: [gentoo-user] File transfer via USB?

2021-04-25 Thread Grant Taylor

On 4/23/21 7:45 PM, k...@aspodata.se wrote:

Grant:


I think you are conflating me for the OP.  Easy to do with the same 
first name.  ;-)


In that case, your usb-connection (or anything) will probably be a 
borderline case too, since that is also a network...  But I guess the 
thing forbidden is anything making the ms-win box recognize and use 
something to communicate outwards.


I agree and that such is a possibility and is something that Grant 
/Edwards/ -- the OP -- will need to make a judgement call on.



Don't know much about the windows side, but I found this:
  https://stackoverflow.com/questions/766912/raw-ethernet-frames-using-winsock
  https://www.winpcap.org/
  
https://hacked10bits.blogspot.com/2011/12/sending-raw-ethernet-frames-in-6-easy.html
seems to be some programming involved.


That's about what I had expected.  Not many, if any, ready built tools 
for transferring raw Ethernet frames.  Though plenty of scaffolding to 
create it.



Seems it never was, though there were patches:
  https://flylib.com/books/en/3.151.1.29/1/


That's about what I remember.


4.18-rc1 it seems.


Thank you for finding and sharing that milestone.

Aside:  I don't like using "milestone" to describe the point at which 
something was removed.  Particularly something I think is good.



Ah, forgot that one.


;-)


About the original question. Here what a few thing I dig up.

https://www.amazon.com/Laplink-High-Speed-Transfer-Cable-PCmover/dp/B0093H83DW


This is the USB version of the Laplink cable concept.

Decidedly different than the old serial & parallel versions, which 
I've used many times.  LapLink, INTERLNK.EXE & INTERSVR.EXE, and '95's 
Direct Cable Connection used them.  I suspect there may have been more 
that I'm not aware of.



https://sourceforge.net/projects/lptransfer/


Interesting.

I'm not sure why a separate program was needed.  Maybe it didn't 
monopolize the server side the same way that INTERSVR.EXE did.


Because INTERLNK.EXE would map a drive to the server and return control 
to the command prompt / batch script allowing use of the new drive letter.



https://github.com/viveris/uMTP-Responder


Interesting.

This is most likely to be the least problematic considering that it 
turns the Linux end into a /special/ USB flash drive.


It is usually simple to setup and use a serial null-modem cable and 
run kermit or somthing on the MS-Win side and add a getty (I've used 
mgetty) handling the serial port on the linux side.


Is it wrong that the first thing that came to mind when reading the OP's 
post is UUCP with as high speed serial as possible?


I wonder if the USB LapLink (type) cable or USB On The Go gadget cables 
could present as a multi-megabit serial interface.




--
Grant. . . .
unix || die



Re: [gentoo-user] File transfer via USB?

2021-04-23 Thread Grant Taylor

On 4/22/21 9:25 AM, k...@aspodata.se wrote:

No IP doesn't prohibit ethernet.


I agree technically.  Though I suspect it /may/ be problematic with the 
spirit behind / motivating the ban on IP.



You could possible use:
  raw ethernet frames


Do you have any recommendations of utilities for each side?


  netbeui
samba


I thought that Samba has *LONG* been NetBIOS over TCP/IP (a.k.a. NBT). 
Is NetBEUI code /still/ in Samba?



  ethertalk (appletalk)
http://netatalk.sourceforge.net/
  ipx (netware)
ftp://ftp.koansoftware.com/public/opensource/mars_nwe/mars_nwe-0.99.pl21.tgz


I believe that IPX support has been removed from 4.x kernels. 
Maybe 5.x.


DECnet Phase III or Phase IV.


I have previously (in the 90's) used mars, worked great.


I've never run MARS but I've done more than a little with Novell 
NetWare.  I recently had a 4.14 kernel mount an NCPFS from a server. 
(4.14 obviously still has IPX.)




--
Grant. . . .
unix || die



Re: [gentoo-user] mouse very sluggish in Virtualbox

2021-04-10 Thread Grant Taylor

On 4/10/21 6:41 PM, the...@sys-concept.com wrote:
I have: AMD Ryzen 5 3400G with Radeon Vega Graphics and I don't know if 
this is the problem with Graphic integrated CPU or Virtualbox-6.1.16-r1


I run Windows 7 in Virtualbox and browsing file manager files in 
Windows is very slow.  Sometime I click on a file and have to wait 
second or two before file is highlighted.


Gentoo is running normally and respond normally.  I run same Windows 
7 in Virtualbox on: AMD Ryzen 7 3800XT 8-Core Processor and browsing 
file is more responsive.


Make sure that you have the guest additions installed and that the VM 
(shells) are configured the same way.


I've frequently had poor response and / or missed mouse clicks as a 
result of not having the guest additions installed.




--
Grant. . . .
unix || die



Re: [gentoo-user] IPsec

2021-04-06 Thread Grant Taylor
Pre-Script:  I'm probably in a bad mental state to reply, but I want to 
answer some valid questions before others reply.  Please take what I say 
and how I say it with a grain of salt.  I don't mean anything personally.


I /do/ appreciate the constructive and thought provoking responses that 
I'm getting.


On 4/6/21 1:07 PM, Sid Spry wrote:

Can you clarify why you need to use IPsec?


I don't have a /need/ in any normal sense.  But I do /want/ to mess / 
play with and learn about /IPsec/.  --  I have used many other VPNs; 
OpenVPN and WireGuard.  But I'm finding my understanding of IPsec 
lacking, hence my desire to learn about /IPsec/, specifically 
/transport/ mode.


If it is to support a commercial client you may be better off 
handing them a system based around BSD.


*blink*

Nothing against any of the BSDs, or any other Unix for that matter.  But 
... I think this is a /Linux/ mailing list.  ;-)  So ... suggesting 
something other than Linux seems counterproductive.


More flexibility will be had from Linux, but pfSense/OPNsense gives 
you a point and click web terminal which is easier to train in house 
IT on due to the documentation available.


I'd like to add IPFire to that list.  Especially considering that it's 
Linux based.  ;-)


The modes are also usually sufficient -- site to site tunnel (like 
the appliances you're used to using), intranet protection, and routing 
options for the same.


"Usually" being the operative word.  "Sufficient" being in the eye of 
the beholder.


*I* /personally/ _frequently_ fall outside of "usually".  Being the 
person that I am, what is "sufficient" for the vast majority of people 
leaves me wanting.



If you control everything you can use wireguard or OpenVPN.


If it wasn't for the fact that I'm wanting to play with / learn about 
IPsec, I would completely agree with you.  However, my desire to learn 
about /IPsec/ is in direct conflict with your otherwise reasonable 
suggestion.



To answer some of your later questions in summary:
1. Of the projects libreswan seems to best maintained, though openswan 
still releases regularly. I would start with libreswan. For racoon, 
see https://www.netbsd.org/docs/network/ipsec/rasvpn.html.
2. Yes, see 
https://libreswan.org/wiki/VPN_server_for_remote_clients_using_IKEv2. 
Don't worry about embedding key material in your scripts (unless 
you expect someone has bugged your monitor). The key material has 
to be on disk in some form anyway.


Please allow me to elaborate.

I view -- what I understand to be the quintessential mode of operation 
for -- Pre Shared Keys to have a security weakness ~> flaw in that both 
ends must know the PSKs used for each direction.  Thus compromising 
either end completely compromises the security of the connection.


Further, if we assume that a per-system key is used for {en,de}cryption 
(pick one, I don't think it matters which), then we can probably further 
assume that the same per-system key is used for {en,de}cryption with 
other additional systems.  As such, compromising the PSK(s) on one 
system likely compromises at least one of the PSK(s) for other systems.


What's more is that PSKs tend to be static.  --  Maybe there are ways 
with IKE to use PFS to minimize the damage done by knowing the PSK(s).


I feel like most, if not all, of this is avoided by not having PSK(s) or 
other keying material in scripts and on systems.


We are probably all familiar with having a TLS certificate key pair on 
systems these days.  So if we can leverage and re-use those key pairs, 
that would be Really Nice™.


Typical usage has the tunnel creation commands referencing key 
material.


There is a difference in referencing a PSK and referencing a key pair. 
Especially when looking at the output of ps.



Bash disables history in noninteractive shells by default.


I feel like relying on a default, which can be changed, is not a good 
basis for security.  }:-)


3. Drop opportunistic encryption. It's best if you or the user knows 
if the network is secure or not.


Agreed.

The O.E. is more to allow other systems to be able to communicate with 
my system /more/ securely if they want to.


There are also ways to have IPTables allow IPsec protected traffic while 
blocking unprotected traffic.  Thus providing the hard pass / fail that 
I think you're alluding to.
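
E.g. a sketch using the IPTables policy match (the port and details are 
placeholders, assuming the xt_policy module is available):

   iptables -A INPUT -m policy --dir in --pol ipsec --proto esp -j ACCEPT
   iptables -A INPUT -p tcp --dport 25 -j DROP

This accepts traffic that arrived inside IPsec ESP and drops the same 
service when it arrives unprotected.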


4. The authentication header (AH) does not provide 
"security."


What does "security" mean?

I agree that AH does not provide /confidentiality/.

Encapsulating security payload (ESP) provides confidentiality and, 
if selected, authentication. Check the docs -- usually you want 
authentication and confidentiality, merely confidentiality allows 
some classes of attacks.


I will check out the authentication option for ESP.

Though, I suspect it's going to be quite a bit more difficult to pull 
off a MitM with ESP that's only providing confidentiality assuming that 
proper authentication has recently happened in conjunction with 

Re: [gentoo-user] IPsec

2021-04-06 Thread Grant Taylor

On 4/6/21 8:09 AM, J. Roeleveld wrote:
I only managed to get it working between off-the-shelve devices, 
but would prefer to do it from Linux.


That's where some of my experience is; SOHO routers, 15+ years ago.  I 
think I did manage to get FreeS/WAN (at the time) to establish a VPN 
with one of the SOHO routers that I was using at the time.


But I've started to get some more experience using IPsec without IKE 
recently.



Please keep it on the list so I can participate in the process.


Okay.  Here's a copy of what I've sent to the handful of people that 
replied to me in the various places I sent the broadcast.


I'll elaborate on the things that I'm pondering below.

- ip xfrm - I'm currently dabbling with IPsec transport mode between 
some systems using the following commands:


--8<--
AKEY1=0x$(xxd -c 32 -l 32 -ps /dev/random)
AKEY2=0x$(xxd -c 32 -l 32 -ps /dev/random)
AID=0x$(xxd -c 4 -l 4 -ps /dev/random)
ASRC="$LeftIP"
ADST="$RightIP"
ALOCAL="$ASRC"
AREMOTE="$ADST"

echo "Run the following commands on $LeftHost."
ip xfrm state add src $ASRC dst $ADST proto esp spi $AID reqid $AID \
   mode transport auth sha256 $AKEY1 enc aes $AKEY2   # b out state (SA)
ip xfrm policy add src $ALOCAL dst $AREMOTE dir out tmpl src $ASRC \
   dst $ADST proto esp reqid $AID mode transport      # b out policy
ip xfrm state add src $ADST dst $ASRC proto esp spi $AID reqid $AID \
   mode transport auth sha256 $AKEY1 enc aes $AKEY2   # b in  state (SA)
ip xfrm policy add src $AREMOTE dst $ALOCAL dir in tmpl src $ADST \
   dst $ASRC proto esp reqid $AID mode transport      # b in  policy

echo
echo
echo

echo "Run the following commands on $RightHost."
ip xfrm state add src $ADST dst $ASRC proto esp spi $AID reqid $AID \
   mode transport auth sha256 $AKEY1 enc aes $AKEY2   # d out state (SA)
ip xfrm policy add src $AREMOTE dst $ALOCAL dir out tmpl src $ADST \
   dst $ASRC proto esp reqid $AID mode transport      # d out policy
ip xfrm state add src $ASRC dst $ADST proto esp spi $AID reqid $AID \
   mode transport auth sha256 $AKEY1 enc aes $AKEY2   # d in  state (SA)
ip xfrm policy add src $ALOCAL dst $AREMOTE dir in tmpl src $ASRC \
   dst $ADST proto esp reqid $AID mode transport      # d in  policy
-->8--

This is working and does enable IPsec /transport/ /mode/ between 
$LeftHost and $RightHost.  But it's completely manual at the moment.
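
Aside:  the matching inspection / tear-down commands, which help while 
iterating on this manually:

   ip xfrm state list
   ip xfrm policy list
   ip xfrm state flush
   ip xfrm policy flush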


I'm curious if you have any comments on "ip xfrm".

- strongSwan / Libreswan / Openswan / FreeS/WAN - I dabbled with 
FreeS/WAN the better part of 20 years ago.  It worked at the time.  But 
I've not needed or wanted to do anything with IPsec again until 
recently.  --  I've taken a foray through OpenVPN and WireGuard, both of 
which were decidedly easier than IPsec.


It's my understanding that Openswan and strongSwan are direct forks of 
FreeS/WAN and that Libreswan is a fork or rename of Openswan.


What I'm not sure of is what the actual current status of the *Swan(s) is.

Also, how do the *Swan(s) relate to racoon, which I see referenced as 
being independent?


- X.509 certificate based authentication - One of the reasons my script 
above is manual is because I don't want to embed keying material in 
config files on the VPSs that I'm using IPsec transport mode between. 
I'd like to figure out if it's possible to use X.509 certificates to 
have the two IPsec endpoints authenticate against each other and 
dynamically negotiate keying material based on their public & private 
key pairs that they already have.


E.g. can $LeftHost use its private key to authenticate itself to 
$RightHost and vice versa?


I presume that this would be done via IKE, and I further presume that it 
will likely be IKEv2.


- Opportunistic Encryption - I really like the idea of IPsec 
Opportunistic Encryption so that systems can dynamically / automatically 
configure and use IPsec /transport/ /mode/ encryption between each other.


- AH vs ESP - Do the cryptographic primitives of ESP supplant AH in 
confirming ~> authenticating that the traffic came from the host that is 
sending the traffic?  E.g. can ESP offer the same authentication that AH 
does?  Or are AH and ESP truly different functions which don't overlap?


- Transport vs Tunnel Mode - I'm really interested in /transport/ mode 
more than I am tunnel mode.  I'd like to get my various servers to use 
IPsec /transport/ mode configured (much like my script) to protect all 
of the traffic between them.


I did some playing this weekend with /transport/ mode between my Linux 
router at home and one of my VPS(s).  Yes, my Linux router is 
functioning as a basic NATing router.  But, it occurred to me 
/transport/ mode might work between my router and my VPS(s) in that 
Linux /was/ doing the /NAT/ing.  Meaning that it was effectively the 
endpoint of the traffic.  Thus the 

[gentoo-user] IPsec

2021-04-04 Thread Grant Taylor

Hi,

Does anyone have any experience with IPsec?  Preferably on Gentoo or 
Linux in general?


I'd like to discuss some things (probably off list) while wading into 
the IPsec pool.  E.g.:


 - ip xfrm ...
 - strongSwan
 - Libreswan
 - X.509 certificate based authentication, preferably /mutual/
 - Opportunistic Encryption
 - Transport Mode
 - Tunnel Mode



--
Grant. . . .
unix || die



[gentoo-user] OpenRC vs SysV init scripts.

2021-03-24 Thread Grant Taylor

Hi,

Does anyone have any pointers on where to start on converting a 10-15 
year old SysV style init script to OpenRC?


I'm starting to use something that includes an ancient SysV style init 
script and trying to get it to work under OpenRC init properly on boot.


It seems as if the SysV init scripts don't start things on boot despite 
being in the default runlevel.  Yet I can reliably start / control the 
service with "rc-service $ServiceName start".
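
For reference, the shape that I believe I should be converging on is 
something like this minimal sketch (the names and paths are hypothetical):

   #!/sbin/openrc-run
   # /etc/init.d/myservice

   command="/usr/sbin/mydaemon"
   command_args="--config /etc/mydaemon.conf"
   command_background="yes"
   pidfile="/run/mydaemon.pid"

   depend() {
       need net
       after logger
   }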


Any suggestions would be appreciated.



--
Grant. . . .
unix || die



Re: [gentoo-user] "sys-fs/exfat-utils" vs "sys-fs/exfatprogs"

2021-03-20 Thread Grant Taylor

On 3/20/21 11:35 AM, Neil Bothwick wrote:
I'm not saying there is a direct relationship, but the exfat-progs 
readme states it is for use with the new in-kernel fs while exfat-utils 
is from the same devs as the FUSE module.


Okay.

I'll accept what's written on the tin as what the targeted environment is.

I feel like what's written on the tin doesn't sufficiently address the 
OP's question:


On 3/20/21 9:22 AM, Dr Rainer Woitok wrote:

Can anybody comment on the pros and cons of either package?



--
Grant. . . .
unix || die



Re: [gentoo-user] "sys-fs/exfat-utils" vs "sys-fs/exfatprogs"

2021-03-20 Thread Grant Taylor

On 3/20/21 9:52 AM, Neil Bothwick wrote:
Looking at the github readme, it would appear that exfat-progs is for 
use with the new in-kernel exfat fs, while exfat-utils is a companion 
to the older FUSE implementation of exfat.


Maybe I need more caffeine, but I can't see the /direct/ relationship 
between /user/ /space/ utilities and /kernel/ /space/ support for the 
same file system.


Is there something that I'm missing that would prevent using exfat-progs 
(user space) with FUSE exFAT (kernel space) -or- exfat-utils (user 
space) with in-kernel exFAT (kernel space)?


#confused



--
Grant. . . .
unix || die



Re: [gentoo-user] root on nfs and multiple ip addresses

2021-03-19 Thread Grant Taylor

On 3/19/21 5:55 AM, William Kenworthy wrote:
Yes, its two IP's to the same MAC address.  Its a raspberry pi 3B 
using swclock so time may be an issue though I dont see how, but its 
still a different IP for each stage, but the logs are showing the 
same MAC address.  Google shows its a known problem and not just me - 
but its a reason why and then a better fix than stopping the OS from 
starting its interface that I am looking for.  Yes I understand what 
Joost is getting at and I do want to test it but I have a delay:


You might consider turning up logging on the DHCP server to see if it 
can give you any information on why it's handing out different IPs.  It 
may be detecting that the IP handed out last time is in use and not 
realizing that it's the same system.


I've had good luck with the BIND and INN user community mailing lists 
from ISC.  I suspect that the DHCP mailing list would be equally helpful.


Unfortunately an ill advised "rm -rf" in my storage system has left 
me restoring 7TB of data, so it will be a couple of days more before 
I have time again - thank-you borgbackup - you are a life saver


Ouch!  The good news is that you have the data backed up to be able to 
restore it.  Also, inadvertent and unexpected backup test.  :-j




--
Grant. . . .
unix || die



Re: [gentoo-user] Question about runlevels.

2021-03-18 Thread Grant Taylor

On 3/18/21 12:54 PM, Victor Ivanov wrote:

Yes


Okay.

Generally yes, when changing from one runlevel to another OpenRC will 
stop all services from the previous (current) runlevel and start the 
services for the next (new) runlevel.


Good.

However, my understanding is that the `boot' and `sysinit' runlevels are 
"special" and services started there are also included in all 
"non-special" runlevels.


Ah.

In your case, `myService-boot' should remain active in `default' along 
with `myService-default'. You can double check that by running


  $ rc-status


Hum.

I'm not seeing any overlap between services in the boot runlevel and 
services still running in the default runlevel.


   cat <(rc-status | egrep "^ " | sort | awk '{print $1}') \
      <(rc-update | grep boot | awk '{print $1}' | sort) | sort | uniq -c


All the services are listed one time.

If any of the boot services are still running as a default service, I 
would expect their count to be two.


It's difficult to say without understanding what they do and their end 
goal. It also depends on how the services are coded. If they try to bind 
to the same port then yes, chances are that `myService-default' will 
fail at this point as the former would still be running.


It doesn't really matter.

My question was about a boot service continuing to run in the default 
runlevel or not.  It doesn't matter what myService-boot and 
myService-default are.  They could be zfs-mount and sshd.


I'm only interested in if myService-boot continues to run in the default 
runlevel or not.


   myService-boot    | boot
   myService-default | default

If that's the case, one option is to separate their responsibilities 
and/or make `myService-default' depend on `myService-boot' and have it 
leverage whatever it is that `myService-boot' already provides.


Immaterial.



--
Grant. . . .
unix || die



[gentoo-user] Question about runlevels.

2021-03-18 Thread Grant Taylor

Hi,

Do services started in the "boot" runlevel continue to run in the 
"default" runlevel?


Or do they get stopped as part of transitioning from the "boot" runlevel 
to the "default" runlevel?  (Or any other runlevel that doesn't include 
the service.)


I'm wondering about having two things that are very similar (as in use 
the same ports), but distinctly different (different configurations) 
with the following:


   myService-boot    | boot
   myService-default | default

Will myService-boot start and run during boot, then stop when the system 
goes into the default runlevel?


My expectation is that OpenRC will (try to) start myService-default when 
the system enters the default runlevel.  But it will fail if 
myService-boot is still running.




--
Grant. . . .
unix || die



Re: [gentoo-user] root on nfs and multiple ip addresses

2021-03-17 Thread Grant Taylor

On 3/17/21 8:59 AM, Neil Bothwick wrote:
Is something changing the MAC address of the Pi after initial 
boot? That would explain both the issue of two addresses and the 
consistency of them.


Compare packet captures of the various DHCP requests and make sure that 
they are the same.


There might be some optional parameter that is different between them, 
inducing the DHCP server to offer different addresses.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-16 Thread Grant Taylor

On 3/16/21 6:16 AM, Michael wrote:

Yes, I won't argue against this all around rational position.


;-)

Thank you for the CRC / checksum on my logic and possibly even my position.

Fair enough.  It is clear to me your proposal won't break things. 
Quite the opposite it will eliminate the chance of being the cause 
of localhost misconfiguration breaking various services.


:-)

The syntax of /etc/hosts as presently configured in the Gentoo handbook 
doesn't even agree with the hosts man page installed by baselayout - 
the man page I believe follows the Debian convention.


That should be addressed as well.

I think that any concerns regarding DEs being able to resolve the 
system's FQDN (?) when using dynamic IPs should also be addressed.



ACK.  This and Samba AD is where this thread started I think.


Kerberos and AD (Windows or Samba) were the most poignant examples of 
why I thought having the FQDN resolve to 127.0.0.1 was incorrect.



I was talking about the domain name changing, not the host name.


I consider the domain name to be part of the host name.  But that's a 
different discussion.



my_laptop.home.com

my_laptop.work.com


Think about an email server, in different locations:

   smtp.branch-office-1.example.com
   smtp.branch-office-2.example.com

Remember that kernels only have a singular name, which is a free-form 
text string, including periods, as their host name.  As such, the kernel 
on each system should know its own name as something that lets humans 
differentiate between the two systems.  Thus, the output of `hostname` 
should return an FQDN.


With this in mind, and the methodology of using the same configuration 
everywhere, I think your notebook's hostname should be the same at home 
and at work.


There is an independent name for a given connection, which can, and 
frequently does, differ from what the attached system thinks the 
hostname is.  E.g. my home router thinking that its FQDN is


   home-router-gw.home.example.net

While a reverse DNS lookup for its IP will be something like

   dhcp-a-b-c-d.town.isp.example

But, like I said, that's another, different, probably larger conversation.

However, the hostname should be set in /etc/conf.d/hostname, 
or netifrc(?).


I think the /hostname/ is completely independent of anything network 
interface related.  So, /etc/conf.d/hostname.


Aside:  This also touches on the strong vs weak host model and what the 
interfaces & names belong to.  Linux by default uses the weak host model 
where IPs and interfaces belong to the system (thus any interface).


Right, the topic has been (re)visited a number of times.  I wonder 
what has brought about the hosts file syntax in the current version 
of the Handbook.


Inquiring minds want to know.

Perhaps it is time to file a bug to propose a way forward both on the 
Handbook and the Wiki pages to ensure network configuration remains 
consistent across the documentation.


Perhaps.

I do appreciate the sanity check on my logic, and the result of my logic.

Thank you for the discussion Michael.  :-)



--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-13 Thread Grant Taylor

On 3/12/21 12:04 PM, Michael wrote:
Right.  That's the nub of it.  Samba, with AD-DC and Kerberos 
configuration deserves special attention and the apps devs advise 
accordingly.


I see it differently.

There's the sloppy / slipshod way that doesn't negatively affect /most/ 
things.  Then there's the better / proper way that doesn't negatively 
affect anything.


I see no reason to ever do it the sloppy / slipshod way when it's simple 
to do it the better / proper way.


Yes, I recall apache would fail if you tried to contact 
http://localhost or its FQDN from the server itself, with something 
like "... host name not valid for this server", but it would serve 
the default "It works!" webpage when the server's FQDN was called 
from clients.  Anyway, all this is O/T from the main question.


It is on topic as supporting evidence to why the main topic, having the 
hostname on the 127.0.0.1 / ::1 IP in the /etc/hosts file, is a bad idea.


It doesn't, obviously the two files are fulfilling different purposes. 
You could however specify in the DC's host file any additional DNS 
servers in the AD DNS zone with their static IP addresses.  I tend 
to do this and also check the hosts file in the first instance when I 
forget what other machines play some (important) role on the current 
host's functions.  This is by no means a rule or even a recommendation 
for others to do the same.  ;-)


Ah.  So you're (ab)using the /etc/hosts file as a form of documentation 
to make life for future you easier.  Fair enough.  But call the spade 
the spade that it is.  State that you're putting the information there 
for documentation purposes, not because it's needed for some other reason.


I wouldn't call it majorly "wrong" on a standalone desktop use case, in 
the sense that it shouldn't break things - I think.


I would call a configuration that works in all cases to be superior to a 
configuration that only works in some cases and fails in other cases. 
As such I'm describing the inferior configuration as "wrong".


Address 127.0.0.1 is for internal consumption, it won't be seen by the 
external network and the host can refer to itself as its user desires.


External hosts will see the 127.0.0.1 / ::1 address when things, like 
Kerberos, use gethostbyname() and put the returned value into traffic 
that leaves the system.


Aside:  localhost / 127.0.0.1 / ::1 is /not/ unique to any system. 
Conversely, a host's name /is/ unique to /only/ that system.  Thus 
anything that wants the local host's unique name should never use / see 
localhost / 127.0.0.1 / ::1.  As such, any time that a host's unique 
name resolves to a non-unique address should be considered wrong.


Furthermore, LAN addresses and domains may change all the time on 
say a roaming laptop, so setting up aliases against a temporary LAN 
IP becomes cumbersome.


I never allow an external DHCP server (et al.) to specify the local 
system's host name.  Especially DHCP servers that I don't know, much 
less trust.


People's names don't change when they move to a different address.  At 
least this is the norm for the vast majority of people in the U.S.A.  I 
assume the same for the rest of the world.


Yes, specifying a FQDN against localhost doesn't align with the 
practice of most distros and a number of RFCs, therefore asking why 
the handbook offers this guidance without qualifying it is worth 
exploring further.


Very good point.

We have already established the handbook suggestion creates breakage on 
Samba with AD/DC, potentially on a webserver, and perhaps other server 
applications.  I agree using 127.0.0.1 for the special "localhost" 
hostname is cleaner and in these use cases the right solution.


Yes.

I recalled old bugs filed about this and had a look.  I don't know of 
other dev conversations/bugs and what might have produced the current 
guidance in the handbook:


https://bugs.gentoo.org/40203
https://bugs.gentoo.org/53188


These hint at other underlying bugs / (mis)configuration issues.

I can see why people might have chosen to hack around this problem by 
causing the host's name to resolve to 127.0.0.1 / ::1.  --  However, 
I'll argue that a better solution would be to add an additional IP 
address to the lo (or dummy) interface and make the name resolve to that.
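
A sketch of that alternative (the address and names are placeholders):

   root@host#  ip addr add 192.0.2.10/32 dev lo

with a matching /etc/hosts:

   127.0.0.1    localhost
   192.0.2.10   thishost.example.net thishost

The host's name then resolves to an address that is unique to the host, 
while localhost stays localhost.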


Interestingly you attracted my attention to the man page for the 
hosts file, which I assume is installed by baselayout.  I noticed 
this example quoted at the bottom where 127.0.1.1 is used for the 
host's FQDN:


EXAMPLES
# The following lines are desirable for IPv4 capable hosts
127.0.0.1   localhost

# 127.0.1.1 is often used for the FQDN of the machine
127.0.1.1   thishost.mydomain.org  thishost


You can probably guess that I think this is a bug which should be corrected.

Or at the very least call out that this is inferior and can cause problems.

If the Gentoo handbook recommends something different, I think the devs 
should at least 

Re: [gentoo-user] how to install mailman3 on gentoo

2021-03-12 Thread Grant Taylor

On 3/11/21 7:37 PM, John Covici wrote:

I would appreciate some assistance.


I would highly recommend that you subscribe to the Mailman Users mailing 
list.


I have been subscribed to the MM-Users mailing list for a decade or more 
and have always found everybody to be quite helpful.  Mark S. is the 
current maintainer and he routinely responds to posts.  There are a 
handful of other frequent prolific helpers in the MM community too.


Tell them what you're wanting to do, where you're wanting to do it 
(Gentoo vs Ubuntu) and they will help you with anything Mailman 
specific.  They will probably try to answer more Gentoo specific 
questions, but may refer you back to a more Gentoo specific location 
(here) if things wander too far from Mailman.  E.g. they may only be 
able to offer limited help on Gentoo specific init scripts for Mailman.


Go subscribe, and ask your questions.

Note:  I don't know if there is one common list or separate lists for 
MMv2 and MMv3.




--
Grant. . . .
unix || die



Re: [gentoo-user] Weird harddisk problem: AHCI disks sometimes not found

2021-03-11 Thread Grant Taylor

On 3/11/21 12:39 PM, Alexander Puchmayr wrote:

Hi there,


Hi,

I have a weird harddisk detection problem which raises the question: 
what does the gentoo-kernel do differently than the ubuntu kernel?


Probably multiple things.  They probably have configurations that are at 
least slightly different.  I wouldn't be surprised if there are slightly 
different levels of patching too.


My understanding is that gentoo-kernel differs slightly from a vanilla 
kernel source.



Without the Ubuntu observation I'd say its a hardware problem


I'd still be inclined to question hardware.  But I agree that difference 
in behavior based on different software is suspicious.  I wonder if the 
Gentoo kernel is tickling a bug in the drive's firmware.


and the old HDDs are simply beyond their age, but why are they working 
in ubuntu and not in gentoo?


I don't think that older drives would fail in the way that you are 
describing.


And what is it doing with BIOS/Harddisk that even Bios does not find 
it anymore?


That sounds to me like the drive itself is misbehaving and not 
responding the way the BIOS expects.



I need a full powercycle to make bios find it again.


That really sounds like the drive is having a problem.  Or that the 
Gentoo kernel is inducing the drive into a state that is a problem.


What happens if you unplug power and data cables from the drive and then 
reconnect them?  Does the BIOS then see the drive?


I'm wondering if it's the drive and / or controller that's getting wedged.

This indicates a gentoo kernel problem, and I have no idea where 
to start looking, and AFAIK there's nothing much to configure a 
SATA/AHCI drive.


As Mark indicated, you should be able to compare kernel configs.

I don't remember hearing about such a bug.  I wonder if the Gentoo 
kernel is trying to do something slightly different and tickling a 
subtle bug that is causing the drive and / or controller to lock up.


I'd think that it would be easy to remove power and data cables from the 
drive while the computer is powered on to see if that also revives the 
drive.



Any ideas?


Not really.  Just threads to chase.



--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-11 Thread Grant Taylor

On 3/11/21 6:38 AM, Michael wrote:

The syntax is:

IP_address canonical_hostname [aliases...]


The man page for hosts has the following to say:


DESCRIPTION
This  manual  page  describes  the format of the /etc/hosts file. 
This file is a simple text file that associates IP addresses with 
hostnames, one line per IP address.  For each host a single line 
should be present with the following information:


 IP_address canonical_hostname [aliases...]

The IP address can conform to either IPv4 or IPv6.  Fields of the 
entry are separated by any number of blanks and/or tab characters. 
Text from a "#" character until the end of the line is a comment, and 
is ignored.  Host names may contain only alphanumeric characters, minus 
signs ("-"), and periods (".").  They must begin with an alphabetic 
character and end with an alphanumeric character.  Optional aliases 
provide for name changes, alternate spellings, shorter hostnames, 
or generic hostnames (for example, localhost).  If required, a host 
may have two separate entries in this file; one for each version of 
the Internet Protocol (IPv4 and IPv6).


I want to call out "For /each/ /host/ a *single* *line* should be 
present" and "a host /may/ /have/ *two* /separate/ /entries/ in this 
file; *one* /for/ /each/ /version/ /of/ /the/ /Internet/ /Protocol/".


I interpret this to mean that any given host name (alias or canonical) 
should appear on at most one line per protocol family.


As such, having the local host's name, qualified or not, appear on 
multiple lines for the same protocol is contrary to what the man page 
states.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-11 Thread Grant Taylor

On 3/11/21 6:38 AM, Michael wrote:
I'm losing my thread in this ... thread, but what I'm trying to say 
is the AD/ DC and Kerberos way of processing the /etc/hosts entries, 
when an /etc/hosts file is used, is different to your run of the mill 
Linux box and server.


I disagree.

First, AD/DC ~ Kerberos don't process the /etc/hosts file.  They do ask 
the system to resolve names to IP addresses.


Second, the system will process the /etc/hosts file, DNS, NIS(+) in the 
order configured in /etc/nsswitch.conf so that it can resolve names 
to IP addresses for programs that ask it to do so.
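
E.g. the relevant line in /etc/nsswitch.conf typically reads:

   hosts:  files dns

meaning that /etc/hosts ("files") is consulted before DNS.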


Third, both non-AD / non-Kerberos and AD / Kerberos systems ask the 
system to resolve names to IP addresses.  Further, I'll bet dollars to 
donuts that they call the same functions and use the same subsystems.


I will agree that non-AD / non-Kerberos systems are not sensitive to -- 
what some consider to be -- the misconfigurations that AD / Kerberos 
systems are.


The Samba link in a previous message makes it clear the DC must have 
a DNS domain, which corresponds to the domain for the AD forest, 
this will be used by the Kerberos AD realm; and, the DC must have a 
static IP address.


Yes.  But that has nothing to do with the contents of the /etc/hosts file.


The syntax is:

IP_address canonical_hostname [aliases...]


Agreed.  That's what it should be.  Though I've seen all sorts of failures.


Therefore, in an entry like:

127.0.0.1   localhost host.example.net host

the "host.example.net" and "host" are both entered as aliases, but 
will nevertheless resolve to 127.0.0.1 - which will break the Samba 
AD DC requirement.


Agreed.

The host name and FQDN must resolve to the static IP of the DC on 
the LAN.


Remember, that this also applies to clients, not just DCs.

Since /etc/hosts is parsed from the top, things may work fine when 
the localhost entry is further down the list and further down than 
any other entries acting as AD DNS resolvers - I don't recall testing 
this on Samba to know for sure.


Why are you putting entries for the DNS servers in the /etc/hosts file?

The same syntax won't break a LAMP, or vanilla linux PC, as long as 
the same box is not acting as a DC.


Actually it can.  I've seen it multiple times in the past.

Bind a service to /only/ the LAN IP.  Then have the system try to 
connect to itself.  It will fail because the service isn't listening on 
the loopback IP.


This is (or was) common on web servers that had multiple IP addresses to 
use different TLS certificates before SNI became a viable thing.  Have 
each virtual web server listen on only its specific IP address.  Have 
the virtual web server for the system's FQDN follow suit for consistency 
reasons.  Then trying to connect to the FQDN would fail if it was an 
alias for 127.0.0.1 or ::1.


See my statement above re. entries for AD DNS resolvers, if these 
are listed in the /etc/hosts file.


You didn't answer my question.

What does the number of DNS servers (configured in /etc/resolv.conf) 
have to do with the contents of the /etc/hosts file?


The /etc/hosts file specifies the LAN IP address(es) of the DC which 
acts as DNS resolver for the AD DNS zones.


No, the /etc/hosts file has nothing to do with how /DNS/ resolution 
operates.


The DC's /etc/resolv.conf shouldn't be pointing to non-AD compatible 
resolvers.


Which has nothing to do with the contents of /etc/hosts.

ACK.  I hope what I've written above better reflects my understanding, 
although it could be factually incorrect.  Other contributors should 
soon put me right.  :-)


I'm wondering if your understanding is that there's a close relationship 
and interaction between the contents of /etc/hosts and /etc/resolv.conf, 
as in the former affects the latter.  This is not the case.


/etc/hosts and /etc/resolv.conf are completely independent and can each 
quite happily exist without the other.  You can even run systems without 
one or the other.  Running without both is technically possible, but 
things start to get ... cumbersome.


You can add entries in /etc/hosts for the DNS servers as a convenience. 
But doing so has no influence on how the DNS resolution subsystem 
functions.  The DNS resolution subsystem is driven by options in the 
/etc/resolv.conf file.  And it's done independently of the contents of 
the /etc/hosts file.
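
E.g. a minimal /etc/resolv.conf looks something like this (the addresses 
are placeholders):

   search example.net
   nameserver 192.0.2.53
   nameserver 192.0.2.54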


Yes, the /etc/hosts file and the /etc/resolv.conf file both have to do 
with name to IP (and IP to name) resolution.  But they are as 
independent of each other as looking up a phone number in the phone book 
vs calling and asking the operator to look it up for you.  They achieve 
the same goal, but do so in completely different ways and completely 
independently of each other.


This has been and is an interesting discussion.  However, I'm still no 
closer to learning why the Gentoo handbook wants the local host name 
added to the 127.0.0.1 / ::1 entry in the /etc/hosts file.  Something 
which I believe is wrong and bad 

Re: [gentoo-user] What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-10 Thread Grant Taylor

On 3/8/21 5:59 PM, antlists wrote:
As I remember, you always had to use eselect to switch versions ... and 
witness all the chaos with python at the moment ...


I don't know.

If you leave things "at the default", doesn't that screw you over when 
python/kernel/gcc etc upgrade and a depclean deletes your original 
default version? Or is that now fixed so you can't mess things up that way?


I've not changed the kernel yet.  I didn't knowingly have any problems 
with Python changes /related/ /to/ gcc.


My Python problem was that my make.profile was pointing to the old 
portage directory that was still back at last March while emerge was 
using a newer incrementally updating version of portage.  --  I consider 
this to be my fault.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-10 Thread Grant Taylor

On 3/10/21 10:43 AM, Mark Knecht wrote:

OK, agreed, completely. localhost must be turned into an IP address.


:-)

I guess what I was thinking was DNS means Server. If it's a Service 
then that's different. I think we're in agreement that if it can find 
the name in /etc/hosts, either actively or cached somewhere in memory, 
then it doesn't have to send anything over a cable to get the answer.


And cable is too generic as I understand that DNS might be on this 
machine.


How about we settle on a UDP and / or TCP connection to a service 
somewhere, local or remote, that translates a name to an IP.  ;-)


Agreed but I suspect if I don't have it in /etc/hosts then I'm unlikely 
to get results that make sense in real time, but that's case by case.


I think a number of DNS servers are defaulting to resolve A queries for 
"localhost" to 127.0.0.1 and AAAA queries to ::1.  So, even if it's not 
in /etc/hosts, you'll still probably get the expected resolution.


 I'm approaching my 66th birthday. Deep dark times for me are 
almost certainly more recent dates than for you. ;-)


~chuckle~

I took it as simply a Kerberos setup/config warning. Whoever wrote 
that had an opinion, experience or both and wanted you to know that. I 
didn't read anything more into it.


ACK

By default, Kerberos includes IP restrictions in tickets.  It chooses 
the IP based on what the system returns.  So if the system returns 
127.0.0.1 (or ::1) for the hostname, any tickets that use that IP will 
be non-viable / useless anywhere but localhost.
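A quick way to check what your system will hand to Kerberos, since 
getent walks the same NSS lookup path (the host name and addresses here 
are examples):

   $ getent hosts $(hostname)
   127.0.0.1   localhost host.example.net host   <- tickets pinned to loopback
   192.0.2.1   host.example.net host             <- tickets pinned to the LAN IP

You'll see one line or the other depending on your /etc/hosts; the 
first is the failure mode being warned about.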


The author cannot change what "some distros" do but wants to give 
you a fighting chance to get Kerberos working in case you're using 
one. Makes no sense to mention a specific distro because the list 
probably changes over time.


Agreed.

Basically "You'd be wise to look at your /etc/hosts file and fix 
this silly configuration error that some distros do before trying to 
setup Kerberos"


Yep.  Experience has shown that it breaks things.

I'm not a sys admin nor a Gentoo developer or documenter so I cannot 
comment on the manual specifically.


As I no longer run Gentoo - I haven't for about 3 years other than 
one remaining VM seldom used and seldom updated - I'm way out of 
touch with the actual manual but interested in the subject.


Fair enough.



--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-10 Thread Grant Taylor

On 3/10/21 9:38 AM, Michael wrote:
I always thought the localhost class A addresses were from days of old 
'inter-network' era.  The difference with 127.0.0.1 and a private 
LAN address is the 127.0.0.1 does not reach the data link layer, 
but loops-back at IP layer 3 and responds to any applications on the 
local PC.  So, I understood this to mean it never went through the 
whole network stack, as it does when you ping a remote host.


The 127/8 (formerly called a class A) network is reserved / allocated 
for a host to communicate with itself.


However, /how/ local addresses are used is entirely implementation 
specific.  This goes for both 127.0.0.1 and other addresses bound to 
local network cards.


Linux will not send traffic destined to the local LAN IP out the NIC 
either.  But that's a /Linux/ /implementation/ detail.  Other OSs, e.g. 
Windows, don't use a loopback adapter for 127.0.0.1.  Instead it's 
purely a software construct.  But that's a /Windows/ /implementation/ 
detail.


Aside:  Windows (2k and onward) does have a loopback adapter that you 
can optionally install.  However you /can't/ assign 127.0.0.1 (or any 
127/8) to it.  It is meant to be used like Linux uses the dummy adapter.
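For reference, a minimal sketch of the dummy adapter usage alluded to 
above (the interface name and address are examples):

   modprobe dummy                          # load the dummy driver
   ip link add dummy0 type dummy           # create the interface
   ip addr add 192.0.2.100/32 dev dummy0   # anchor a service IP on it
   ip link set dev dummy0 up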




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-10 Thread Grant Taylor

On 3/10/21 9:00 AM, Mark Knecht wrote:
My undocumented (and unsupported by data) opinion is that this 
localhost thing has been around a long, long time - possibly longer 
than Linux for all I know. Check out


Yes, very much so.

TL;DR:  The "localhost" name is a shortcut to say this host that I'm on 
without worrying what the actual host name is or that said name is 
configured to resolve to an IP on this system.


The localhost concept goes back a LONG way in TCP/IP.  I think that it 
even pre-dates TCP/IP, via the NCP protocol.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-10 Thread Grant Taylor

On 3/10/21 8:25 AM, Michael wrote:
I think this is relevant to DNS resolution of/with domain controllers 
and may depend on the AD/DC topology.


I disagree.  Pure Linux in a MIT / Heimdal Kerberos environment has the 
same requirements.  Hence having nothing specific to do with Active 
Directory, much less the AD topology.


The idea is to use the LAN address of the box as the first address 
in /etc/hosts and use 127.0.0.1 as the second address in the file.


Please elaborate.  Because I believe the following complies with your 
statement:


192.0.2.1   host.example.net host
127.0.0.1   localhost

Which is effectively the same as the following:

127.0.0.1   localhost
192.0.2.1   host.example.net host

Both of which are different than the following:

192.0.2.1   host.example.net host
127.0.0.1   localhost host.example.net host

Putting host.example.net and host on the 127.0.0.1 line doesn't 
accomplish anything.  And it still suffers from -- what I think is -- 
the poor recommendation that I'm inquiring about.


If more AD/DNS servers exist in the network, then 127.0.0.1 could be 
even further down the list.


https://docs.microsoft.com/en-us/previous-versions/windows/it-pro/windows-server-2008-R2-and-2008/ff807362(v=ws.10)?redirectedfrom=MSDN


What does the number of DNS servers have to do with the contents of the 
/etc/hosts file?


How is the contents of the /etc/hosts file related to the 
/etc/resolv.conf file?


I haven't over-thought this and there may be more to it, but on a 
pure linux environment I expect this would not be a requirement, 
hence the handbook approach.


Apples and bowling balls.  /etc/hosts is not the same concept as 
/etc/resolv.conf.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-10 Thread Grant Taylor

On 3/10/21 6:27 AM, Mark Knecht wrote:

Caveat - not an expert, just my meager understanding:

1) The name 'localhost' is historically for developers who want to 
access their own machine _without_ using DNS.


Eh

Using the /name/ "localhost" still uses name resolution.  It could use 
DNS or it may not.  It /typically/ means the /etc/hosts file.  But it 
could mean DNS or NIS(+) or LDAP or something more esoteric.


IMHO what's special about the "localhost" name in particular is that 
it's an agnostic / anycast method to say the local host that a given 
program is running on without regard to what the actual host name is.


2) By general practice sometime in the deep, dark times 127.0.0.1 was 
accepted for this purpose. There's nothing special about the address.


Deep, dark times?  It's still used every single day across multiple 
platforms, Linux, Unix, Windows, z/OS, i/OS, you name it.


3) I read the original quoted comment in the Kerberos Guide as a warning 
- 'to make matters worse, __SOME__'


What did the warning mean to you?  Because I took it as "be careful, 
your $OS /may/ do this incorrectly".  Where "this" is putting the FQDN 
on the same line as 127.0.0.1 and / or ::1.


4) In my /etc/hosts I do _NOT_ map my machine's name to the same address 
as localhost, avoiding the Kerberos warning:


ACK

I'm grateful for corroboration, but unfortunately that doesn't speak to 
why the Gentoo handbook suggests what it does.




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-03-09 Thread Grant Taylor

On 2/21/21 3:23 PM, Grant Taylor wrote:
Will someone please explain why the Gentoo AMD64 Handbook ~> Gentoo (at 
large) says to add the local host name to the 127.0.0.1 (or ::1) entry 
in the /etc/hosts file?  What was the thought process behind that?


Shameless Bump  --  I'm still interested in understanding the logic 
behind the choice in the Gentoo Handbook.


Additional information.

The Samba Wiki states the following in the Preparing the Installation 
section of the Setting up Samba as an Active Directory Domain Controller 
document.


"The host name and FQDN must not resolve to the 127.0.0.1 IP address or 
any other IP address than the one used on the LAN interface of the DC."


Link - Setting up Samba as an Active Directory Domain Controller - 
Preparing the Installation
 - 
https://wiki.samba.org/index.php/Setting_up_Samba_as_an_Active_Directory_Domain_Controller#Preparing_the_Installation




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 3/8/21 7:30 PM, John Covici wrote:
At least I didn't have to change profiles and gcc versions several 
times.


I didn't /change/ the profile.  As in it was 17.0 when I started and 
still is 17.0.


I did have to update the make.profile link to point to the same profile 
in the alternate portage directory.


This wouldn't have been an issue if I just re-used the same portage 
directory.



I guess different situations require different methods.


Indeed.



--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 3/8/21 5:35 PM, Neil Bothwick wrote:

Not if you went up a slot, then the old version would still continue to
be used until you ran gcc-config. However, if you were depcleaning at each
step, that would remove the previous slot and you would stay current.


So my overall method, which included depclean, did allow subsequent runs 
to use the new updated GCC.


Thank you for clarifying.


I tend to keep old copies of gcc around until I'm sure things play nicely
with the new version:

% gcc-config -l
  [1] x86_64-pc-linux-gnu-9.3.0
  [2] x86_64-pc-linux-gnu-10.2.0 *


For comparison, gcc-config -l currently shows me:

 [1] aarch64-unknown-linux-gnu-9.3.0 * (cyan *)
 [2] x86_64-pc-linux-gnu-10.2.0 * (green *)

The aarch64* came in as part of @openwrt-prerequisites.  I should 
probably remove that as I no longer need it.


Thank you for your input Neil.



--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 3/8/21 4:16 PM, Neil Bothwick wrote:
It would have to be done before the first update, when the repo was 
set to a date just after the last update.


Yes and no.

It really could have been done at any point along the way.

Also, with the git version of the portage repo, I could switch back to 
the branch from any time I wanted to.


You can rephrase that as "I left it at the default", which is an 
acceptable answer :)


*nod*

It means you probably spent a lot of time compiling gcc versions only 
to carry on using the old version, but as you said, this wasn't about 
efficiency.


Wouldn't the next execution of gcc, post Emerge & Installation use the 
newly emerged binary?


Even if the next package in a given emerge run didn't use the new gcc, 
I would fully expect that subsequent emerges would use the new gcc.


You were going to emerge -e @world at the end anyway, which would 
get everything built with the latest toolchain.


Yes.

I have initiated a full system backup. I'll start an `emerge -e @world` 
after that finishes.


I'll actually do the full suite:

1)  emerge -e @world
2)  emerge --depclean --verbose n
3)  emerge @preserved-rebuild
4)  revdep-rebuild

I expect that #3 should be a NoOp and just burn CPU cycles.

I don't know anything else that can be done to make a Gentoo box happier 
(from a software standpoint).


Most of the effort for you was developing the procedure. All the real 
effort was left to the computer.


Exactly!

Well, developing the method /and/ establishing trust therein.


I was thinking of a week max.


I suspect that would be quite safe.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 3/8/21 4:03 PM, Grant Edwards wrote:
How do you feel it compares to just installing from scratch while 
preserving whatever config and user data you care about? I've done 
that quite a few times and it usually takes about 2-3 hours for the 
initial install and then overnight to build a desktop environment 
(if one is needed).


I feel like installing from scratch misses a lot of things.  Even 
wholesale overwriting the new /etc with the old /etc is questionable. 
You'd have to make sure that all the same software was installed.


Aside:  I've spent too much time around other SAs that would "recover" a 
down server by doing a fresh install in hours and then spending weeks 
getting everything back to the way that it needed to be, versus spending 
~18 hours restoring from tape and having things work the way they were 
24 hours prior.  I also never cared for in-place upgrades (installing 
over top of itself) in the Windows world.


I feel *MUCH* /more/ comfortable with what I did than other solutions. 
I trust that this is the same install with patches applied.  I couldn't 
and wouldn't say the same for an installation over the top or fresh 
installation.


Don't get me wrong.  I believe there are places for fresh installations. 
 They usually coincide with new machines and / or new drives for me.


After all, I effectively have the same thing that I would have if I had 
done updates over the last year like they should have been done.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 3/8/21 3:29 PM, Neil Bothwick wrote:
With hindsight, removing firefox, thunderbird and libreoffice and 
replacing them with their -bin counterparts at the start of the 
process would have saved much time. You could switch back to the 
source options once the system is up to date.


You're probably correct.

However I don't think I would do that even if I had known / thought 
about doing so.  Partially because changing things was questionable at / 
near the start and partially because this was about possibility, not 
efficiency.


How did you manage gcc upgrades, did you run gcc-config manually 
whenever gcc was updated?


Is "I ignored them and let emerge deal with it" count?  I did see gcc 
upgrades along the way.


I don't remember what it was at the start, probably 8.x or 9.x.  I did 
see 9.3 somewhere along the way.  gcc -v says that 10.2.0 is currently 
installed.


Do you feel it was worth the effort of updating for every day of the 
git history?


I don't know if it was worth the effort or not.  I initially did one day 
at a time while testing the waters and going from theory to some 
practical experience of the method.


Very quickly I used a different version of e (1 or 2) that took the date 
as a parameter.  My command line was calling e with the date derived 
from the d variable and then decrementing the d variable after the e 
function finished.  I.e.


   e $(date +%Y-%m-%d -d "$d days ago"); d=$((d-1))

I would let that run, deal with any results, then hit the up arrow and 
enter.


I just let that process continue for a while.  Then at some point I 
optimized it into e3 and ran that for a while.  Then I optimized that 
into the while e3; do true; done loop.


But I stuck with single day steps mostly from inertia.  It was working. 
So ... stick with it.


Would a larger increment have saved time, or did you think minimising 
the number of issues to deal with after each emerge was more important?


Maybe.  If anything, it would have saved the time for emerge to process 
all of its metadata.  Much like an initial emerge vs an emerge 
--resume.  But again, this was about the viability of the process, not 
the efficiency thereof.


I probably could have gone with a week at a time.  I don't know if that 
would have helped or not.  I don't think I would go with more than a 
week with a largely automated process.


I think one month increments probably would be pushing the envelope.  I 
feel like some of the Python changes were 2 or maybe 3 months apart.  So, 
with a two month step, depending on where you landed, you might jump 
from Python 3.6 without 3.7, past the window with both 3.6 and 3.7, to 
3.7 without 3.6 in a single step.  That probably wouldn't be pretty.


Anyway, glad it worked for you - it's more or less how I would have 
approached it but never had to, so thanks for doing the legwork :)


You're welcome.

Hence the DenverCoder9 comment, for people searching ~> reading the 
mailing list archive in the future.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 2 - SUCCESS! - CURRENT!!!

2021-03-08 Thread Grant Taylor

On 2/25/21 5:31 PM, Grant Taylor wrote:

10 have git switch to the next day
20 emerge -aDUN @world
30 assess / deal with masked packages
40 goto 10

It /looks/ like things are working.


*TL;DR*

DenverCoder9:  DEAR PEOPLE FROM THE FUTURE ...

This method /does/ work.  I have successfully brought the problem system 
from ~1 year old to ~current (Gentoo portage repo < 24 hours old).


*Speed Bumps*

These were the four things that caused the biggest slowdowns in this 
process.


1)  Source packages / ebuilds no longer available.
 - I found and downloaded files to DISTDIR.
 - I copied some ebuilds from older versions of portage to my local 
repo.

2)  make.profile not using PORTDIR definition in make.conf.
 - I ran into this while working on October ~ November '20 updates.
3)  PYTHON_TARGETS & PYTHON_SINGLE_TARGET
 - I ran into this after fixing #2.
 - I had to add the following to pull Python 3.6 back in so that 
things would work to add Python 3.7, before allowing the system to 
remove Python 3.6 (again).

  PYTHON_TARGETS="python2_7 python3_6 python3_7"
  PYTHON_SINGLE_TARGET="python3_7"
4)  Firefox & Thunderbird 68 disliking rust ≈ 1.48.
 - I had to give up on retaining version 68 of Firefox and Thunderbird.
 - The loss of some important extensions still really hurts.

*How*

The high level process that I used is a very close superset of what I 
hypothesized.


10 have git switch to the next day
20 emerge -DUN @world
21 emerge --depclean --verbose n
22 emerge @preserved-rebuild
23 revdep-rebuild
30 assess / deal with output from steps 20-23
40 goto 10

Steps 21-23 were added mid-stream; the numbering keeps comparison with 
the previous message simple.

All of these steps were in a function, `e3` (see attached file), which 
relied on one variable, `d`, the count of how many days to go backwards; 
from that it derived the date (`D`) that everything should act on.


Aside:  The next version of e3 would probably store `d` in a file and 
subsequently re-load it from said file on each invocation.  Thus 
eliminating the reliance on the environment variable.  I would probably 
store this file in /var/tmp as /tmp and /dev/shm are cleared on boot.
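Something like this is what I have in mind (a sketch only; the counter 
file name is made up, the repo path assumes the /var/db/repos/gentoo 
migration mentioned below, and the steps are 20-23 from above):

   e3() {
      local d D
      d=$(cat /var/tmp/e3.day)                    # load the day counter
      D=$(date +%Y-%m-%d -d "$d days ago")        # derive the date to act on
      git -C /var/db/repos/gentoo checkout "$D" || return  # per-day branch
      emerge -DUN @world        || return
      emerge --depclean         || return
      emerge @preserved-rebuild || return
      revdep-rebuild            || return
      echo $((d - 1)) > /var/tmp/e3.day           # persist the decrement
   }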


After gaining enough trust in the overall process, I ended up running 
the following while loop:


   while e3; do true; done

This allowed the system to stay busy emerging things up to the point 
that something failed and needed attention.


*Setup*

I did a `git clone` of the Gentoo portage repo.  Currently ~6 GB.

I then created the branches in the git repo with the following command 
(from inside of the git repo directory):


   for ((age=1; age<1024; age++)); do
     read -r hash date time < <(git log \
         --pretty=format:'%H %cd' --date=format:'%Y-%m-%d %H:%M:%S' \
         --after="$(date +%Y-%m-%d -d "$((age + 1)) days ago")" \
         --before="$(date +%Y-%m-%d -d "$((age - 1)) days ago")" \
       | fgrep -m1 "$(date +%Y-%m-%d -d "$age days ago")")
     time git checkout -b "$date" "$hash"
   done


Basically, this command starts at current (`stable`), finds the last 
(most recent) commit for each given date, creates a branch for it, and 
works backwards for however many days are configured; 1024 in the example.


*Miscellany*

I did `emerge -e @world` 3~5 times throughout the process just to make 
sure that everything was consistent.  I will do this once more tomorrow 
after a full backup runs tonight.


I did end up removing a small list of packages that were blocking emerge 
in one way or another.  --  I decided that removing them to allow emerge 
to complete of its own accord was more expedient than fighting them at 
the time.  I will re-add them as necessary.


 - net-firewall/nftables
 - net-fs/ncpfs
 - media-gfx/gimp
 - dev-python/pycairo
 - dev-python/fido2
 - net-analyzer/scapy
 - app-crypt/yubikey-manager

Some of the packages were subsequently pulled back in.

I did run into a bug with app-misc/pax-utils where I needed to add 
"-seccomp" for the package to be able to move forward.


I also did the /usr/portage to /var/db/repos/gentoo et al. migration.

"repo" can be ambiguous when there talking about both "Gentoo portage 
repo" and "git repo".  Especially when the latter is managing the former.


The following packages take what seems like  F O R E V E R  to emerge:

 - gcc
 - rust
 - Firefox
 - Thunderbird

Link - xkcd - Wisdom of the Ancients (a.k.a. DenverCoder9)
 - https://xkcd.com/979/

*Summary*

Yes, there are probably faster and / or more efficient processes to get 
a Gentoo system that's ~1 year behind caught up to current.  But I did 
learn some things along the way.  --  I tried to outline the toe 
stubbers so others can avoid them.


Ultimately, I believe I have done in the last 11 days what would have 
been done over the course of the last ~year.  Even 11 days is longer 
than necessary as I started with the while loop after getting to

Re: [gentoo-user] zfs repair needed (due to fingers being faster than brain)

2021-03-01 Thread Grant Taylor

On 3/1/21 3:25 PM, John Blinka wrote:

HI, Gentooers!


Hi,

So, I typed dd if=/dev/zero of=/dev/sd, and despite 
hitting ctrl-c quite quickly, zeroed out some portion of the initial 
part of a disk.  Which did this to my zfs raidz3 array:


OOPS!!!


 NAME                                         STATE     READ WRITE CKSUM
 zfs                                          DEGRADED     0     0     0
   raidz3-0                                   DEGRADED     0     0     0
     ata-HGST_HUS724030ALE640_PK1234P8JJJVKP  ONLINE       0     0     0
     ata-HGST_HUS724030ALE640_PK1234P8JJP3AP  ONLINE       0     0     0
     ata-ST4000NM0033-9ZM170_Z1Z80P4C         ONLINE       0     0     0
     ata-ST4000NM0033-9ZM170_Z1ZAZ8F1         ONLINE       0     0     0
     14296253848142792483                     UNAVAIL      0     0     0  was /dev/disk/by-id/ata-ST4000NM0033-9ZM170_Z1ZAZDJ0-part1
     ata-ST4000NM0033-9ZM170_Z1Z80KG0         ONLINE       0     0     0


Okay.  So the pool is online and the data is accessible.  That's 
actually better than I originally thought.  --  I thought you had 
accidentally damaged part of the ZFS partition that existed on a single 
disk.  --  I've been able to repair this with minimal data loss (zeros) 
with Oracle's help on Solaris in the past.


Aside:  My understanding is that ZFS stores multiple copies of its 
metadata on the disk (assuming single disk) and that it is possible to 
recover a pool if any one (or maybe two, for consistency checks) are 
viable.  Though doing so is further into the weeds than you normally 
want to be.


Could have been worse.  I do have backups, and it is raidz3, so all I've 
injured is my pride, but I do want to fix things.  I'd appreciate 
some guidance before I attempt doing this - I have no experience at 
it myself.


First, your pool / its raidz3 is only 'DEGRADED', which means that the 
data is still accessible.  'OFFLINE' would be more problematic.


The steps I envision are

1) zpool offline zfs 14296253848142792483 (What's that number?)


I'm guessing it's an internal ZFS identifier (the vdev's GUID).  You 
will probably need to reference it.


I see no reason to take the pool offline.


2) do something to repair the damaged disk


I don't think you need to do anything at the individual disk level yet.


3) zpool online zfs 


I think you can fix this with the pool online.
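Something along these lines is what I would expect, once the partition 
table is repaired (a sketch using the pool name and GUID from your 
output; verify against the zpool man page first):

   zpool online zfs 14296253848142792483   # re-attach the device by its GUID
   zpool scrub zfs                         # verify / rewrite the repaired disk
   zpool status zfs                        # watch the scrub / resilver progress

If ZFS still rejects the disk after that, a zpool replace onto the same 
(repaired) device would resilver it from the rest of the raidz3.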

Right now, the device name for the damaged disk is /dev/sda. 
Gdisk says this about it:


Caution: invalid main GPT header,


This is to be expected.


but valid backup; regenerating main header from backup!


This looks promising.


Warning: Invalid CRC on main header data; loaded backup partition table.
Warning! Main and backup partition tables differ! Use the 'c' and 'e' options
on the recovery & transformation menu to examine the two tables.


I'm assuming that the main partition table is at the start of the disk 
and that it's what got wiped out.


So I'd think that you can look at the 'c' and 'e' options on the 
recovery & transformation menu for options to repair the main partition 
table.
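Roughly this sort of gdisk session, if memory serves (verify each step 
against gdisk's own prompts before writing anything):

   # gdisk /dev/sda
   Command (? for help): r   # recovery & transformation menu
   Recovery/transformation command (? for help): b   # use backup GPT header
   Recovery/transformation command (? for help): c   # load backup partition table
   Recovery/transformation command (? for help): w   # write the result to disk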



Warning! Main partition table CRC mismatch! Loaded backup partition table
instead of main partition table!


I know.  Thank you for using the backup partition table.


Warning! One or more CRCs don't match. You should repair the disk!


I'm guessing that this is a direct result of the dd oops.  I would want 
more evidence to support it being a larger problem.


The CRC may be calculated over a partially zeroed chunk of disk.  (Chunk 
because I don't know what term is best here and I want to avoid implying 
anything specific or incorrectly.)



Main header: ERROR
Backup header: OK
Main partition table: ERROR
Backup partition table: OK


ACK


Partition table scan:
   MBR: not present
   BSD: not present
   APM: not present
   GPT: damaged

Found invalid MBR and corrupt GPT. What do you want to do? (Using the
GPT MAY permit recovery of GPT data.)
  1 - Use current GPT
  2 - Create blank GPT

Your answer: ( I haven't given one yet)


I'd assume #1, Use current GPT.


I'm not exactly sure what this is telling me.  But I'm guessing it
means that the main partition table is gone, but there's a good
backup.


That's my interpretation too.

It jibes with the description of what happened.


In addition, some, but not all disk id info is gone:
1) /dev/disk/by-id still shows ata-ST4000NM0033-9ZM170_Z1ZAZDJ0 
(the damaged disk) but none of its former partitions


The disk ID still being there may be a symptom / side effect of when 
udev creates the links.  I would expect it to not be there post-reboot.


Well, maybe.  The disk serial number is independent of any data on the disk.

Partitions by ID would probably be gone post reboot (or eject and 
re-insertion).


2) /dev/disk/by-partlabel shows entries for the undamaged disks in 
the pool, but not the damaged one


Okay.  That means that udev is recognizing the change faster than I 
would have expected.


That 

[gentoo-user] Is the "Messages for package ..." output from emerge logged somewhere?

2021-02-28 Thread Grant Taylor

Hi,

Is the "Messages for package ..." output from emerge logged somewhere?

I'd like to re-read the "Messages for package ..." output from emerge 
after the fact.  Is there a concise collection of that somewhere?  Or do 
I have to pilfer through logs of each and every package to find it?


I'm hoping that the "Messages for package ..." output that shows up at 
the end of an emerge (e.g. @system or @world) is conveniently available.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 1

2021-02-26 Thread Grant Taylor

On 2/26/21 11:55 PM, Arve Barsnes wrote:
I'm not sure what you're saying here, but the ebuild files of the 
installed packages are in /var/db/pkg


Hum.

Today I Learned...

The ebuild and what looks like additional metadata files are in the 
/var/db/pkg directory tree.  But the source files aren't in the tree. 
At least not for the example package I looked at.


A find across the /var/db/pkg tree does not turn up any tar files either.



--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 1

2021-02-26 Thread Grant Taylor

On 2/26/21 12:50 PM, Neil Bothwick wrote:

Ah yes, I hadn't thought about the mirrors being too up to date.


There's also an issue with older packages being installed.  E.g. I have 
an older kernel source (4.14.127) that I'm keeping around for various 
reasons.  I've found that the Gentoo repo / portage removes older ebuild 
files.  So, I've had to source them and add them to my local repo.


Even putting the old ebuilds into my local repo is entertaining because 
repoman / ebuild fail to download the source files b/c the mirrors have 
updated.  Hence finding files and putting them in distfiles.



If the packages are installed, the ebuilds are in var/db/pkg.


The package (distribution files) for the version that is installed are 
in distfiles.  But that does little for an ebuild that's looking for a 
newer version that's no longer on the expected mirrors.


The kernel shouldn't be a problem, I would expect you to be able 
to jump straight to the current version, although you may prefer to 
recompile it once you are fully up to date on the toolchain.


This system has both Open vSwitch and OpenZFS, so I'm loath to allow 
automation to change the kernel version.  That will be a problem for 
future me to deal with manually.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward? - Update 1

2021-02-26 Thread Grant Taylor

On 2/25/21 5:31 PM, Grant Taylor wrote:

10 have git switch to the next day
20 emerge -aDUN @world
30 assess / deal with masked packages
40 goto 10

It /looks/ like things are working.


This method is working.

I have managed to successfully update from 2020-03-24 to 2020-05-29 in 
one day increments.


I'm starting to see some oddities, like a given version of gimp blocking 
itself.  Unmerging and re-emerging solved that.  So I'm going to let the 
system spend 12 hours and do an emerge -DUNe @world && emerge --depclean 
--verbose n && revdep-rebuild, reboot, and continue with 2020-06-01.


I have run into a few problems where emerge can't download files.  So 
I'm finding them online, downloading them, and saving them to 
/usr/portage/distfiles.


I have had a couple things (really old kernel source still installed) 
that I needed to find the ebuild files for.  I added them to my local 
repository.


I did have an ancient package, ncpfs, that was blocking things.  Since I 
didn't need it, I have unmerged it.  If / when I need it in the future, 
I'll deal with it then.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward?

2021-02-25 Thread Grant Taylor

On 2/24/21 9:29 PM, Grant Taylor wrote:
I'm currently doing an "emerge -DUNe @system" on the restore of 
/usr/portage (typical PORTDIR) from prior to messing with things today.


The system is now stable with a full -DUNe @system.

   emerge -DUNe @system
   reboot
   emerge -DUNe @world && emerge --depclean --verbose n && revdep-rebuild

I've got multiple GB of git data.  It looks like there are ~568 thousand 
commits between March 24th last year and now.  Once that's good, and I'm 
back at a stable place, I'll try changing PORTDIR to be the git repo and 
telling git to switch to the commit that's from March 25th.  Then I'll 
see if anything needs to be updated, doing so as necessary.  Then I'll 
leap frog a week at a time seeing what needs to be updated, doing so as 
necessary.  /Hopefully/ I can slowly walk forward.  Time will tell.


I was able to extract the last commit for every day between now and 
2020-03-24 and make a branch for it.


10 have git switch to the next day
20 emerge -aDUN @world
30 assess / deal with masked packages
40 goto 10

It /looks/ like things are working.

Yes, emerge is spending a LOT of time mulling over things.  Many days 
have been "Nothing to merge; quitting."


If I can slowly make my way forward in time via git commit points, I 
/think/ that I /should/ be able to deal with profile and / or compiler 
and / or glibc changes just like I would have X number of months ago.  I 
/think/!


One added advantage of doing this day by day is that when I do get to 
the big changes, things should be fairly clean.  Thus hopefully 
simplifying the big changes.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward?

2021-02-25 Thread Grant Taylor

On 2/25/21 2:51 AM, Michael wrote:

It would probably be better even with a lot of customizations.  ;-)


Please elaborate on what "better" means in this case.  I'm thinking that 
you might be meaning "faster" and / or "easier" (as in less effort).



At least it /should/ be better in terms of time and effort spent.


Maybe.


A reinstall in this context is not a wholesale replace.


~blink~

It implies obtaining the latest Stage 3 archive from a mirror, 
but retaining part of your current installation.  Your /home, /etc, 
/var/lib/portage/world, plus any databases e.g. in /var/lib/mysql/ 
and your kernel config will be retained from your existing system and 
will not be replaced.  Back these up first along with any particular 
customizations you have made, before you untar Stage 3, so you can 
restore them.


Ah.  You seem to be talking about what I would call an "in place 
upgrade" in Windows terms.  As in installing version n over top of n-1 
or n-2.  That's definitely less disruptive than I was thinking.  I was 
thinking that fdisk and / or mkfs would be involved.


Then rsync portage, update all your @world packages and build a new 
kernel (make oldconfig).  Spend some time merging existing application 
config files with etc-update to make them compatible with the latest 
versions of these packages, reboot and hopefully that should be all 
there is to it.


I may end up /needing/ to go that route.  For the moment, I'm going to 
try the incremental updates.


Yes, it would have been, but what is the benefit of updating multiple 
packages many times over, instead of doing it just once?


In some ways, this is a learning experience.  As in it's a proof of concept.

The computer in question spends 2/3 of its life doing nothing but 
idling a few programs.  So, it spending time compiling and producing 
heat is not a bad thing in this case.  Especially when there's 10" of 
snow on the ground.  ;-)




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-02-24 Thread Grant Taylor

On 2/25/21 12:02 AM, Arve Barsnes wrote:

I don't think that was the question Peter sought to answer, but rather
that 'hostname -i' returns the loopback address either way.


But 'hostname -i' /doesn't/ return the 127.0.0.1 or ::1 if the hostname 
isn't on lines with 127.0.0.1 or ::1.



Might still defy logic depending on the way you look at it, but that's
a different question.


Hence why I'm seeking the logic behind what was done.



--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward?

2021-02-24 Thread Grant Taylor

On 2/24/21 9:16 PM, John Covici wrote:

The portdir has to be the one gotten from git, not rsync,


ACK

I'm currently doing an "emerge -DUNe @system" on the restore of 
/usr/portage (typical PORTDIR) from prior to messing with things today.


I've got multiple GB of git data.  It looks like there are ~568 thousand 
commits between March 24th last year and now.  Once that's good, and I'm 
back at a stable place, I'll try changing PORTDIR to be the git repo and 
telling git to switch to the commit that's from March 25th.  Then I'll 
see if anything needs to be updated, doing so as necessary.  Then I'll 
leap frog a week at a time seeing what needs to be updated, doing so as 
necessary.  /Hopefully/ I can slowly walk forward.  Time will tell.


and remember I think there was a major profile change during that 
time period along with changes in the C compiler.


If I can slowly make my way forward in time via git commit points, I 
/think/ that I /should/ be able to deal with profile and / or compiler 
and / or glibc changes just like I would have X number of months ago.  I 
/think/!


Unless you have a lot of customizations, reinstall would be much 
better.


I'd really rather not do that.  I'm more likely to leave this system as 
it is and plan on upgrading it some time in '21.  There's considerably 
more to it than I want to wholesale replace.


Besides, wouldn't each of the incremental processes over the last year 
have been possible?  ;-)




--
Grant. . . .
unix || die



Re: [gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-02-24 Thread Grant Taylor

On 2/24/21 7:37 PM, Peter Humphrey wrote:

Isn't it a matter of simple logic?


No.  It is not.  Consider my question to be calling the logic into 
question.  Or at least asking for what the logic was to be explained.


The loopback address is just that: the machine talking to itself, with 
no reference to the outside world. Whereas, while talking to other 
machines on the network its address is that of the interface. There's 
no connection between those two.


That doesn't explain /why/ the local host name is added to the line 
containing 127.0.0.1 and / or ::1.


Remember that /all/ traffic to a local IP, of any interface, runs 
through the loopback interface.


Try pinging your Ethernet / WiFi IP address in one window and then 
shutting the lo interface down.  The pings will stop responding.  Then 
they will start again when you turn the lo interface back up.
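To see it for yourself (with 192.0.2.10 standing in for your own 
Ethernet / WiFi address):

   # terminal 1:
   ping 192.0.2.10

   # terminal 2:
   ip link set dev lo down   # the pings in terminal 1 stall
   ip link set dev lo up     # ... and resume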


So, even if you do (questionably) connect to the IP address of the 
Ethernet / WiFi adapter instead of 127.0.0.1 / ::1 you are still going 
through the lo interface.


So, again, will someone please explain why the Gentoo AMD64 Handbook ~> 
Gentoo (at large) says to add the local host name to the 127.0.0.1 (or 
::1) entry in the /etc/hosts file?  What was the thought process behind 
that?




--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward?

2021-02-24 Thread Grant Taylor

On 2/24/21 6:48 PM, John Covici wrote:
What you could try to do, if you are syncing using git, is to roll 
it back to those dates by checking out a commit each time and doing 
an update.  I don't guarantee it would work, but it's worth a shot; 
otherwise, reinstall time.


I hit send too soon.

Thank you for the reply John.



--
Grant. . . .
unix || die



Re: [gentoo-user] What is the best way forward?

2021-02-24 Thread Grant Taylor

On 2/24/21 6:48 PM, John Covici wrote:
What you could try to do, if you are syncing using git, is to roll 
it back to those dates by checking out a commit each time and doing 
an update.  I don't guarantee it would work, but it's worth a shot; 
otherwise, reinstall time.


And what if I was still using rsync?

I'm currently doing a git clone of the gentoo-mirror/gentoo.git 
repository in another directory.


Once that finishes, I'll see if I can list the commits in it from March 
and see if I can work my way forward.


Does it /actually/ matter how I get the portage repository as long as 
it's one from a time close enough that "emerge -DUN @world" will succeed 
in small increments?  Even if I have to automate stepping through 
hundreds of them.


My understanding is that emerge works against the contents of the 
PORTDIR, usually /usr/portage.  So as long as I get ... let's call it 
... a compatible version.  That's my hope anyway.




--
Grant. . . .
unix || die



[gentoo-user] What is the best way forward?

2021-02-24 Thread Grant Taylor
I need to update a system that hasn't been updated in 337 days (March 
24th, 2020).  --  Life has been ... trying.


What is the best way forward?

It seems as if there have been a lot of changes in the interim: glibc, 
Python 2.7 being deprecated, default Python going to 3.7(?), and other 
breaking changes.


Is there a way that I can sync portage to something from April, May, or 
even June of 2020, and do a full update (including "-DUNe @world"), 
iterating through multiple rounds to get current?


Any help would be appreciated.



--
Grant. . . .
unix || die



[gentoo-user] Why do we add the local host name to the 127.0.0.1 / ::1 entry in the /etc/hosts file?

2021-02-21 Thread Grant Taylor

Hi,

I'm reading Kerberos - The Definitive Guide[1] and it makes the 
following comment:


And to make matters worse, some Unix systems map their own hostname 
to 127.0.0.1 (the loopback IP address).


This makes me think that the local host name /shouldn't/ be included in 
the 127.0.0.1 (or ::1) entry in the /etc/hosts file.


However, according to the Gentoo AMD64 Handbook[2], we are supposed to 
add the local host name to the 127.0.0.1 (and ::1) entry in the 
/etc/hosts file.


Will someone please explain why the Gentoo AMD64 Handbook ~> Gentoo (at 
large) says to add the local host name to the 127.0.0.1 (or ::1) entry 
in the /etc/hosts file?  What was the thought process behind that?


Incidentally, adding the local host name to the 127.0.0.1 (or ::1) entry 
in the /etc/hosts file causes "hostname -i" to return 127.0.0.1 instead 
of the IP address bound to the network interface.
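To illustrate (names and addresses are examples):

   # with "127.0.0.1  localhost host.example.net host" in /etc/hosts:
   $ hostname -i
   127.0.0.1

   # with the host name only on the "192.0.2.1  host.example.net host" line:
   $ hostname -i
   192.0.2.1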


Thank you for any input you can provide.

[1] Kerberos: The Definitive Guide (p. 109). O'Reilly Media. Kindle Edition.
[2] 
https://wiki.gentoo.org/wiki/Handbook:AMD64/Installation/System#The_hosts_file




--
Grant. . . .
unix || die



Re: [gentoo-user] why both /usr/lib and /usr/lib64 on a 64bit system?

2021-02-14 Thread Grant Taylor

On 2/14/21 10:51 AM, Jack wrote:

I don't think you can completely get rid of it.


My (long term) desire is to do away with /lib32 and /lib64, ultimately 
only using /lib.  Likewise for the other library directories in /usr or 
wherever they are.  I don't see a need for the specific bit variants in 
the future.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: TCP port 445

2021-02-14 Thread Grant Taylor

On 2/14/21 11:26 AM, Michael wrote:

These are the services using port 445:

445 TCP SMB Fax Service
445 TCP SMB Print Spooler
445 TCP SMB Server
445 TCP SMB Remote Procedure Call Locator
445 TCP SMB Distributed File System Namespaces
445 TCP SMB Distributed File System Replication
445 TCP SMB License Logging Service
445 TCP SMB Net Logon


ACK

Right, it isn't.  My bad.  DFS Namespaces mapping uses port 445, 
a different function - see URL at the bottom.

Ah.  Maybe a term collision between "Namespaces" and "name (to IP) 
resolution".


All I found from a random search in the interwebs, is the following 
link.  Port 445 is used for file/printer data sharing as discussed. 
It is also used for 'Distributed File System Namespaces' across 
different domains - but this is not DNS-IP resolution.


Yep.  DFS, being a pseudo / logical server, needs to be found just like 
actual servers.  That makes complete sense.  \\DFS\LogicalServer is 
really at \\CorpServer\ShareA and \\BranchServer\ShareB, take your pick. 
 By the way, CorpServer is at the corporate HQ through the WAN link and 
BranchServer is at the same branch office as you.


https://docs.microsoft.com/en-us/troubleshoot/windows-server/networking/service-overview-and-network-port-requirements


Thank you for the link.  I will read it in the coming days as time permits.



--
Grant. . . .
unix || die



[gentoo-user] Re: TCP port 445

2021-02-14 Thread Grant Taylor

On 2/14/21 4:42 AM, Michael wrote:
You are probably right.  My knowledge of MSWindows environments has 
been on a need to know basis, when I can't avoid it.  ;-)


Fair enough.

I've managed to avoid more Windows in the last 10 years than I could in 
the previous 10 years.


Active Directory Domain Services use port 445 to store and communicate 
domain names, IP addresses, list of services available, etc. for 
a domain.


TCP port 445 is not directly related to AD DS.  Sure, AD DS /uses/ TCP 
port 445, but so do computers that are not participating in AD DS.


TCP port 445 is the port that SMB runs over natively.  Historically, SMB 
ran on top of NetBIOS over TCP/IP (NBT), which uses ports 137 and 138 
(primarily UDP) plus TCP port 139 for the session service.


I suppose initial name to IP resolution happens over port 53, or UDP 
5355 if there is no local DNS resolver configured and the MSWindows 
setup uses LLMNR.  microsoft-ds listens on TCP 445 and communicates 
stored DNS information to clients regarding domain names, domain 
controller(s) and services.  I don't know to what extent microsoft-ds 
is integrated with the basic TCP-IP DNS service, but expect there 
would be some logical linkage in there.


I do not recall seeing anything about name resolution running over TCP 
port 445.


...

Even the venerable WINS (NetBIOS Name Service) ran over TCP port 137.

If you have any authoritative information that you can point to where 
name resolution, of any type, runs over TCP port 445, please share it as 
I'd like to read it.


Anyhow, I think the OPs problem is down to the wrong CUPS driver used 
in remote client(s).


Agreed.



--
Grant. . . .
unix || die



Re: [gentoo-user] why both /usr/lib and /usr/lib64 on a 64bit system?

2021-02-13 Thread Grant Taylor

On 2/13/21 9:38 PM, Dan Egli wrote:
Frankly, I find there's still too many programs that want 32bit 
libraries to go full no-multilib.


Are the programs that you're referring to things that are installed 
through something other than emerge?


I'd naively assume that anything emerged on a system with no-multilib 
would be 64-bit.


What am I missing?



--
Grant. . . .
unix || die



Re: [gentoo-user] Sharing printers via Cups

2021-02-13 Thread Grant Taylor

On 2/12/21 4:00 AM, Michael wrote:
Samba uses the native MSWindows 'Active Directory Domain Services' 
over TCP port 445 to resolve IP addresses when printing over Samba.


I question the veracity of this.

My understanding is that name to IP resolution, particularly in Active 
Directory environments, is all DNS based.




--
Grant. . . .
unix || die



Re: [gentoo-user] spam - different IP's

2021-02-05 Thread Grant Taylor

On 2/5/21 6:57 AM, William Kenworthy wrote:

Use fail2ban to target active abusers using your logs. (recommended)


I've had extremely good luck using Fail2Ban in a distributed 
configuration* such that when one of my servers bans an IP, my other 
servers also (almost) immediately ban the same IP.


*I'm using Fail2Ban's (null / reject) "route" option.  I have BGP 
sessions between my servers synchronizing the banned routes.
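The jail side of that looks roughly like this (a sketch; the route 
banaction ships with Fail2Ban, but check your action.d/route.conf 
before leaning on it):

   # /etc/fail2ban/jail.local
   [DEFAULT]
   banaction = route   # bans become kernel routing table entries,
                       # roughly: ip route add unreachable <offending IP>

A routing daemon (bird, quagga, etc.) on each server then picks those 
kernel routes up and redistributes them to the other servers over BGP.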


Leverage the cloud with something like: 
http://iplists.firehol.org/?ipset=firehol_level1 (loaded to shorewall 
with ipset:hash) to preemptively ban via blacklists - recommended. 
There are many good blacklists out there - this one is a meta-list 
and has fast and responsive updates.


That's an option.

I personally have some trouble swallowing the pill that is other 
people's ban lists.  --  It's one thing to add to a spam score. 
It's another when IPs are out and out blocked.


Aside:  Make use of Fail2Ban's ignore feature to white list (or ignore 
problems from) known good IPs.


Snort (in IDS mode triggering a fail2ban rule) is a bit heavier 
resource-wise but quite useful.  Snort in IPS mode is better, but it 
can impact throughput. (if you are commercial, consider a licence to 
get the latest rules as soon as they are created/needed.)


Another option in the same vein is to use the IPTables variants of the 
Snort rules.




--
Grant. . . .
unix || die



Re: [gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

On 2/3/21 2:42 PM, Matt Connell (Gmail) wrote:
I did.  Sorry for the misinterpretation.  Not familiar with 
debootstrap.


No problem.  That's why I clarified.

The minimum required is probably just the stage3, plus a kernel package 
and a bootloader of some kind.


I'd like to do an old school stage1 or stage2 install / emerge of the 
root system if possible.


The method that tastytea mentioned seems quite interesting.  I'm giving 
it a try now in a scratch directory to see what all it builds.


The kernel will actually come from a User Mode Linux binary on the host 
system.  Launching that binary by hand also accounts for the boot loader.


The stage3 tarball is very minimal as it is.  Of course one man's 
minimal is another man's bloat... so you could probably trim some 
things like ./usr/share


Perhaps I should take another look at the stage3 tarball.  I've clearly 
learned that some of my other assumptions in this thread were wrong / 
sub-par.  So it is only logical that the one about the stage3 is also 
wrong / sub-par.


Same.  I only mentioned that because, as I already admitted, I was 
describing how to get as close to a full Ubuntu as possible.


Fair enough.  I thank you for your help based on your understanding.



--
Grant. . . .
unix || die



Re: [gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

On 2/3/21 1:48 PM, tastytea wrote:
You could install Gentoo into a directory without the build tools, 
but you would have to install packages and update them from a full 
Gentoo installation outside that directory. I've used that technique 
in my Docker experiments.[1]


emerge --root=/workdir sys-apps/busybox will install busybox and all 
build-time dependencies into /workdir.


Intriguing.

Thank you for that pointer.

That actually is close to what I'm messing with.  UML+NFS root

I could probably hack up something where calls to emerge actually 
reach out to the full host and run emerge with the proper --root.


[1] 



$ReadingList++



--
Grant. . . .
unix || die



Re: [gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

On 2/3/21 2:21 PM, Matt Connell (Gmail) wrote:
Probably selecting the "default/linux/amd64/17.1/desktop/gnome/systemd" 
profile would get you the closest to start with.


I hit send too soon.

Based on the new information, I suspect I actually want 
"default/linux/amd64/17.1".  (Or whatever is current at the time.)


Thank you for the help.



--
Grant. . . .
unix || die



Re: [gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

On 2/3/21 2:21 PM, Matt Connell (Gmail) wrote:
@system depends on your profile.  So depending on what profile you 
select, you'll have a different set of implicitly selected packages.


The light bulb is starting to glow.


To answer your original question...


Probably selecting the "default/linux/amd64/17.1/desktop/gnome/systemd" 
profile would get you the closest to start with.  Of course it 
won't automatically select every package that Ubuntubian ships with, 
but that should only require you to add a couple of meta packages 
(like gnome-base/gnome-extra-apps for example) 


I'm thinking we might have different ideas of what debootstrap does.  Or 
perhaps that you thought I meant a fuller Debian / Ubuntu system.


I'm looking for the absolute minimum required for a Gentoo installation. 
 (Preferably one that can manage its own packages and not depend on 
another system.)


I would also VERY MUCH like to stay as far away from systemd et al. as 
possible.




--
Grant. . . .
unix || die



Re: [gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

On 2/3/21 1:29 PM, Dale wrote:
If I recall correctly, the world file from a stage3 tarball is empty. 
It only has the packages you want installed added there.


You and Arve are correct.


Are you thinking about the system packages instead of the world file?


Yes.  That's what I meant.  Thank you for correcting me.


If so, I think it is as small as it can be already.


That is / was my understanding as well.  But I was not certain.  Hence 
why I asked.


After all, a stage3 tarball can't even boot as it has no kernel, 
boot loader mechanism or anything.  It's only enough that you can 
build from to suit your needs.


I largely agree.  But I thought there were also other binaries included 
that aren't strictly needed.




--
Grant. . . .
unix || die



[gentoo-user] Minimal world file.

2021-02-03 Thread Grant Taylor

This may be a silly question, but I don't know, so I'm going to ask.

What is the minimal world file to be somewhat conceptually similar to a 
debootstrap install of Debian / Ubuntu?


Is the world file that ships with stage3 the smallest it can be?  Or are 
there things that can safely be removed?




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Bind to 127.0.0.N for any N

2021-01-29 Thread Grant Taylor

On 1/29/21 6:37 AM, Grant Edwards wrote:

My brain knows that. My fingers only partially so.


I *completely* understand.

I now manage to use 'ip addr' instead of ifconfig _most_ of the 
time. I still almost always use 'route' instead of of 'ip route'. I 
figure in another 20 years, I will have managed a complete transition.


Interestingly enough, routing is one of the things that pushed me to 
using iproute2.  Specifically things related to policy based routing 
(PBR) and multi-path routing.  It's my understanding that the 
traditional route command can't handle either of these.
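For example (addresses, devices, and the table number are illustrative):

   # policy based routing: traffic from 192.0.2.10 uses its own table
   ip route add default via 192.0.2.1 dev eth1 table 100
   ip rule add from 192.0.2.10/32 lookup 100

   # multi-path default route across two uplinks
   ip route add default \
       nexthop via 192.0.2.1    dev eth0 weight 1 \
       nexthop via 198.51.100.1 dev eth1 weight 1

The traditional route command has no syntax for either construct.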



And I didn't even know computer museums were hiring.


Nope.

It's just personal hobbies.


:)

It can be a bit disorienting when I see an unfamiliar message signed 
by Grant.


Yep.

More than once I've seen a message from "Grant" and thought "but I 
didn't write...oh!".




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Bind to 127.0.0.N for any N

2021-01-28 Thread Grant Taylor

On 1/28/21 7:09 PM, Grant Edwards wrote:
I think that's probably right. I had never used the 'ip route' 
command like that and was unaware that route existed.


*nod*

iproute2 has supplanted the venerable net-tools (or whatever it's 
called); ifconfig, route, netstat, etc.


I sort of put pressure on myself to start using them 20 years ago, and 
largely failed.  It wasn't until about 5-10 years ago, when I started 
doing things with ip that couldn't be done with the older commands, 
that I started succeeding in migrating over to iproute2 for 90% of what 
I do.


Admittedly, I still periodically find myself using ifconfig for quick 
status.  All things I can get from ip, but not as readily handy.


Ironically, I've found myself doing / planning to do things within the 
last six months that iproute2 can't / won't do; DECnet, IPX, and AX.25/ROSE.



Yes, that's correct. [I just tested it]

Also correct.


Thank you for confirming.

P.S. I tip my hat at your name.  ;-)



--
Grant. . . .
unix || die



Re: [gentoo-user] Bind to 127.0.0.N for any N

2021-01-28 Thread Grant Taylor

On 1/28/21 5:38 PM, Grant Edwards wrote:

I've just recently realized something about the "lo" interface.


I don't think this is as much about the interface as it is the routes 
that are created.  (More below.)


You can bind a socket to any 127.0.0.N address, even though only 
127.0.0.1/8 is configured in /etc/config/net, and "ip addr" only shows 
127.0.0.1/8 for that interface.


Yes.  But for specific reasons. (...)

In the past, when I wanted to use other 127.0.0.N address, I 
always added them to /etc/config/net. The last time, I forget to do 
that. Later, I realized it was working anyway. I've since removed 
all of the extra "lo" addresses from /etc/config/net, and everything 
still works.


Because of a very special route.


Apparently "lo" is special.

Perhaps I don't even need to have 127.0.0.1/8 listed in 
/etc/config/net...


I think that you still want 127.0.0.1 in /etc/config/net even if only to 
bring the interface up (a la 'ip link set dev lo up', sans IP).


I believe the ""magic that is allowing this to work is one of the four 
following routes:


# ip route show table local | grep 127.0.0 | nl
 1  broadcast 127.0.0.0 dev lo proto kernel scope link src 127.0.0.1
 2  local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
 3  local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
 4  broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1


Lines 1, 3, and 4, are typical routes.  You should have something 
similar for other IPs and devices.


But line 2 is very special.  Notice how it's assigning the entire 127/8 
to the lo device.


Reformatting the route with some white space makes it somewhat more obvious.

 2  local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
 3  local 127.0.0.1   dev lo proto kernel scope host src 127.0.0.1

#3 is a more typical /host/ route.
#2 is a less typical /net/ route.

#2 actually tells the kernel that anything and everything in the 127/8 
destination network can be reached directly via the lo adapter.


This network route is more efficient than having multiple host routes to 
cover some portion of the same IP space.


My understanding -- which may be wrong, and please correct me if you 
think it is -- is that this special route (#2) is how the kernel sends 
the entire 127/8 network to the lo adapter, even if the IP addresses 
aren't bound to the adapter.
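This is easy enough to test (nothing here is configured beyond the 
stock lo setup):

   $ ip addr show dev lo | grep 'inet '
       inet 127.0.0.1/8 scope host lo
   $ ping -c 1 127.0.0.5   # answers, despite 127.0.0.5 never being assigned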


Now, as for things receiving the connections, I think it is highly 
dependent on if the thing is listening to 0.0.0.0 or specific IP 
addresses.  Because if it's listening to 0.0.0.0, I think it will 
happily serve connections to other addresses in 127/8.  If it's 
listening to explicitly 127.0.0.1, then it likely will not serve 
connections to other addresses in 127/8.


I believe the same technique can be applied to other addresses outside 
of the 127/8 network.  Though it's much less often done.  You'd most 
likely see this with a service that wants to serve for an entire /24; 
e.g. 192.0.2.0/24 while listening to 0.0.0.0.
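For what it's worth, the kernel will do the same thing for arbitrary 
networks if asked; this is sometimes called AnyIP (a sketch, using a 
documentation prefix):

   ip route add local 192.0.2.0/24 dev lo
   # a daemon listening on 0.0.0.0 now answers on every address in 192.0.2.0/24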


Admittedly it's been a while since I last dealt with this, so I could be 
mis-remembering.  But I think the special route, #2, is at the root of 
what you're asking about.


Again, I believe you do want the 127.0.0.1 in /etc/config/net to 
actually bring the interface up.  You probably don't even need to bind 
an IP to it.  I think the kernel does the 127/8 automatically /if/ the 
interface is simply up, a la 'ip link set dev lo up'.




--
Grant. . . .
unix || die



Re: [gentoo-user] network bonding in gentoo/openrc

2021-01-18 Thread Grant Taylor

On 1/17/21 11:32 PM, William Kenworthy wrote:

Hi all,


Hi,

how can I add/make active an interface that's to be part of a 
bonded connection without rebooting/restarting the bond?


Does the following work?

   ip link set dev eth2 master bond0

That's from memory without much caffeine.  So check the syntax.
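If it does work, the bonding driver's proc interface should list the 
new slave:

   cat /proc/net/bonding/bond0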



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Console scrollback

2021-01-14 Thread Grant Taylor

On 1/13/21 6:25 PM, Grant Edwards wrote:
Some of the above are shadowed by readline or by bash in emacs mode, 
but the tty driver uses more than a few control keys.


Thank you for the clarification / additional information.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Console scrollback

2021-01-13 Thread Grant Taylor

On 1/13/21 4:06 PM, Grant Edwards wrote:
I really should try to figure out a control-character that's not used 
by emacs or the tty driver


I think there are very few, if any, keys used by the TTY driver.

I suspect you are thinking of the line editor in the shell, e.g. readline.

I can see how Control-S (XOFF) and Control-Q (XON) might be part of the 
TTY driver.
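
For reference, what the TTY driver currently claims is visible with 
stty (output abridged; the exact set varies by system):

   $ stty -a
   ... intr = ^C; quit = ^\; erase = ^?; kill = ^U; eof = ^D;
   start = ^Q; stop = ^S; susp = ^Z; lnext = ^V; ...

Several of these end up shadowed once readline is in the picture.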




--
Grant. . . .
unix || die



Re: [gentoo-user] Console scrollback

2021-01-13 Thread Grant Taylor

On 1/13/21 2:56 PM, Alan Mackenzie wrote:

Hello, Grant.


Hi Alan,

Well, there's really not much that can't be done in a terminal 
emulator.  But it's the manner of the doing that's important.


Okay.  I can appreciate and respect that response.

Doing text work in X is   s l u g g i s h.  Changing from one 
application to another, which would be achieved by, say Alt-F4 on a 
console takes more key sequences in X, and is less than instantaneous.


I don't know that I've ever experienced the sluggishness that you're 
talking about.  But that doesn't mean it doesn't exist.  I will admit 
that Alt-Tabbing through windows is an additional step vs Alt-F# in that 
you have the intermediary list that you cycle through vs just jumping 
directly to the desired window.


The X terminal emulator tends not to occupy the whole screen - it tends 
to have title bars, menu items, tabs even, which just distract from 
the task at hand.  Maybe it can be set up to take the whole screen, 
but that's work.  And the fonts used tend to be less distinct and 
helpful than the 16 x 8 bitmaps I have on the console.


Those seem more like preferences / settings to me.  But preferences are 
still sufficient to drive decisions.



And X windows steals useful key sequences, such as Alt-Tab.


True.

On an Emacs session, in three columns on a console, I can display 
195 consecutive lines of a source file simultaneously.


I would expect the same to be possible in X as on the console.  Or with 
quite close counterparts.



I could go on, but ...

That's not to say there aren't problems with the tty console - even 
before the screen scrolling was removed altogether, it wasn't exactly 
anything to write home about.  And it would be nice to have more than 
16 colours available.  But, on balance, I'll stick with the console.


Fair enough.  To each their own.

I think bringing up a new Gentoo system absolutely requires working 
in the console, certainly up to the point where X11 and a Window 
Manager have been installed and debugged.


True.

Thank you Alan, for enlightening me to your work flow and how the 
console is better for you.




--
Grant. . . .
unix || die



Re: [gentoo-user] Console scrollback

2021-01-13 Thread Grant Taylor

On 1/13/21 11:14 AM, Alan Mackenzie wrote:
This is appalling.  I do all my work on the console (apart from web 
browsing), and with this development, Linux effectively becomes 
unusable to me.  I will NOT be bullied into using second rate 
alternatives like X-Windows terminals.


Wow.  I don't think I've run into someone that was a devout 
{physical,virtual} /console/ user in quite a while.


I'm curious what you do in the Linux console that can't be done in a 
terminal emulator.


I know that there is a lot of difference in different terminal 
emulators.  --  I *strongly* prefer XTerm as it does things that other 
terminal emulators have never heard of.


Please share if you do things that /can/ be done in the Linux console 
that /can't/ be done in a terminal emulator.


If it's just preference, then hats off to you.



--
Grant. . . .
unix || die



Re: [gentoo-user] preventing PC shutdown by power button when running

2020-12-10 Thread Grant Taylor

On 12/10/20 9:20 PM, the...@sys-concept.com wrote:

How to prevent PC from shutdown when running when power button is pressed?
Is it a function in a BIOS or OS?


Press and release, in less than four seconds, is the OS.  Four seconds 
or longer is the BIOS.


Try stopping acpid and seeing if that gets you the desired results.
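
On OpenRC that would be (assuming the service is named acpid, as it is 
on Gentoo):

   rc-service acpid stop

If you'd rather keep the daemon running and just ignore the button, the 
event handler script that acpid invokes (commonly /etc/acpi/default.sh) 
can be edited to do nothing for the power button event.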



--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor
P.S.  You might also be interested in some of the feeds that Team Cymru 
has to offer.  I think they are more friendly to scripted querying.


Link - IP to ASN Mapping Service
 - https://team-cymru.com/community-services/ip-asn-mapping/




--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor

On 12/8/20 9:59 PM, the...@sys-concept.com wrote:
I'll write a script to check, all the IP's from at text file with 
"whois" and write the output out to another file, just to be sure. 
I don't know how long will it take, the file contains 26611-entries 
(IP addresses).


ProTip:  Don't parse the output from WhoIs directly.  Instead save it to 
a file.  Come up with some file naming scheme that encodes the IPs and 
date.  That way you can easily reference them in the future.  Or decide 
that what you have cached is too old and that you need to update it.


I say this because a number of WhoIs servers get fairly upset if they 
think they are being scripted against.


So ... space out the queries and save the output for future re-use.
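
A minimal sketch of that approach (file names and the delay are 
illustrative; adjust to taste):

--8<--
#!/bin/sh
# Query whois for each IP, caching one output file per IP per day.
mkdir -p whois-cache
while read -r ip; do
    out="whois-cache/${ip}-$(date +%Y%m%d).txt"
    [ -s "$out" ] && continue   # already fetched today, re-use the cache
    whois "$ip" > "$out"
    sleep 10                    # be polite; don't hammer the whois servers
done < ips.txt
-->8--

At 26611 IPs with a 10 second delay that's a few days of wall clock 
time, which is exactly why the cache matters.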

You might be correct, Grant.  Putting the IP's in apache .config file 
could be more efficient, instead of .htaccess file.


;-)



--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor

On 12/8/20 8:50 PM, the...@sys-concept.com wrote:
Creating ACL based on those internet sources eg. 
https://www.countryipblocks.net/acl.php is not reliable.  I pulled 
a list of Russian and Ukrainian IPs from the above link and checking 
some of them, I found these two (and possibly more) are French IPs:


deny from 212.114.16.0/24
deny from 212.114.17.0/24


I can't say as I'm surprised.

IMHO GeoIP feeds are, and always have been, somewhat suspect.  You can 
get information from RIRs based on who they allocated the blocks to 
originally (or the last update by them).  Or you can get information from a 
service that tries to be much more accurate.  Or you can get information 
from a Default Free Zone BGP feed.  Or any combination of the above. 
But each source is of different quality and takes a different amount of work.


RIPE's extended delegation list shows 212.114.16.0/21 as being delegated 
to France.
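
Those extended delegation files are pipe-delimited 
(registry|cc|type|start|count|date|status|...), so checking them is 
scriptable.  A sketch against RIPE's published copy (URL correct to the 
best of my knowledge):

   curl -s https://ftp.ripe.net/ripe/stats/delegated-ripencc-extended-latest \
     | awk -F'|' '$2 == "FR" && $3 == "ipv4" { print $4, $5 }'

That prints each French IPv4 delegation as a start address and a count 
of addresses (2048 for a /21).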


I trust the RIR feeds more.  Though, they might not be updated with IPv4 
trading and resale market.


Personally, I'd extract prefixes of ASNs from a DFZ BGP feed and use 
that to filter.  It will be the most up to date of what a given provider 
(ASN) is advertising.


If "geoip" database is based on similar sources the hole project is 
not a reliable control method.


GeoIP is ... nebulous.  You need to consider if you want to proceed with 
imperfect (or completely wrong) data.




--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor

On 12/8/20 6:17 PM, the...@sys-concept.com wrote:

so it might be easier to for apache, am I correct?


Apache vs iptables is somewhat a preference.

Though with Apache, chances are good that you would need to ban in 
multiple locations, possibly multiple VHOSTs or server wide.  (See more 
below.)


Either way, the apache would have to access the database where all the 
codes are stored or .htaccess file.   Or is it easier if I incorporate 
the IP addressed into main .config file (in apache)?


I personally prefer to put things in files that are included directly 
from the main Apache config file in lieu of .htaccess files.  This harks 
back to a time when checking for a .htaccess file per page request had 
measurable impact.  It just seemed easier to put the content in the main 
config file and skip looking for and processing .htaccess files on each 
request.


I don't know what would be more efficient, storing the data somewhere 
outside of Apache and having it check that -or- putting the data in the 
config / .htaccess file(s).
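
For what it's worth, a sketch of the include-from-main-config approach 
mentioned above (Apache 2.4 syntax; the file name and prefixes are 
illustrative):

httpd.conf (or the relevant vhost):
--8<--
Include /etc/apache2/country-deny.conf
-->8--

/etc/apache2/country-deny.conf:
--8<--
<Location "/">
    <RequireAll>
        Require all granted
        Require not ip 212.114.16.0/24
        Require not ip 212.114.17.0/24
    </RequireAll>
</Location>
-->8--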




--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor

On 12/8/20 4:44 PM, Steve Wilson wrote:
I use this as the first step to limit ssh access to one of my servers: 
`iptables -A INPUT -p tcp -m tcp --dport 22 -m geoip ! --src-cc GB 
-m comment --comment "Drop SSH from outside GB" -j DROP`


Has the geoip match extension been updated to take into account MaxMind 
discontinuing their GeoLite database and the need to support GeoLite2?


This has the advantage that apache doesn't need to process the request, 
but a possible downside that you won't be able to display a message 
if that's a requirement.


You could probably DNAT / REDIRECT to an alternate port that is a 
different virtual host that serves up a 403 page.
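
A hedged sketch of that, using the same geoip match (assumes 
xtables-addons and its database are installed; ports and country code 
are illustrative):

   iptables -t nat -A PREROUTING -p tcp --dport 80 -m geoip ! --src-cc GB \
     -j REDIRECT --to-ports 8080

Port 8080 would then be a bare vhost whose only job is to return the 
403 page.  Doing the same for HTTPS is messier, since the alternate 
vhost still has to complete the TLS handshake with a valid certificate 
before it can say 403.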




--
Grant. . . .
unix || die



Re: [gentoo-user] apache blocking access based country

2020-12-08 Thread Grant Taylor

On 12/8/20 3:55 PM, the...@sys-concept.com wrote:

What are my options apache blocking access based on country?


Do you want to block connections to /just/ Apache and /nothing/ else on 
the system?  Or do you want to block connections from specified sources 
to anything and everything on the system?




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: sendmail configuration

2020-11-27 Thread Grant Taylor

On 11/26/20 6:56 PM, Grant Edwards wrote:
After trying to think of reasons to use sendmail, I began to wonder if 
it still supports bang-routing and UUCP as a transport mechanism. A 
bit of googling seems to indicate that it does.


Yes.  I have used this a few times in the last 18 months.  Mostly for 
fun through my small UUCP network.


So there's one thing (that I do understand) that can be done with 
sendmail that can't (AFAICT) be done with the usual replacements.


I thought at least one of the other contemporary MTAs also supported UUCP.

I would assume anything that does support UUCP would also support 
bang-routing as I believe that's more of a UUCP function than an MTA 
function.  So I'm surprised at the idea that other things that do 
support UUCP don't support bang-routing.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: sendmail configuration

2020-11-25 Thread Grant Taylor

On 11/25/20 9:02 PM, Grant Edwards wrote:
O'Reilly's_Sendmail_  4th Edition (the bat book), has 1312 pages and 
weighs four pounds.


There is actually a much smaller read than the quintessential Bat book, 
smaller by multiple orders of magnitude.  IM(ns)HO the Sendmail 
Installation and Operation Guide is well worth reading by anyone that 
wants to be serious about administering sendmail.  I have (re)read 
(multiple versions of) it multiple times in my multi-decade fling with 
Sendmail.  Take part of an afternoon every few years and skim it and / 
or read new / updated parts of it.  It usually ships with the sendmail 
source code.  But you can easily search the web for multiple copies of it.


I've read the SIOG multiple times cover-to-cover.  I've never read more 
than 20-30 pages of the Bat book at any given time.


Aside:  I have a low opinion of many O'Reilly books when it comes to 
learning something new.  They are frequently the definitive reference, 
second only to source code.  But reference material is not the best way 
to learn.  Think man page vs tutorials.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: sendmail configuration

2020-11-25 Thread Grant Taylor

On 11/25/20 9:09 PM, Grant Edwards wrote:
Ah, that's another devine mystery. I believe that the small size of 
a sendmail config file, when compared to the number of malfunctions 
it can create violates several basic tenants of information theory. I 
think the explanation involves extra dimensions that normal software 
can't access.


That's because the sendmail.mc file is not a configuration file in the 
normal sense of the word.  It is a collection of macros that are then 
expanded into the configuration file.


There are many subtle interdependencies that are not obvious.

Many are documented in the cf/README file in the sendmail source bundle. 
 Not all distros include said file.




--
Grant. . . .
unix || die



Re: [gentoo-user] sendmail configuration

2020-11-25 Thread Grant Taylor

On 11/25/20 9:47 PM, Grant Taylor wrote:
That is supported.  You will need to set up a map and tell Sendmail how 
to use it.  It's not difficult.  But it's been so long that I don't 
remember exactly how to do it.  It's another define(...) or feature(...) 
line and adding entries to the file they reference.


TL;DR:

sendmail.mc:
--8<--
define(`SMART_HOST', `mail.shaw.ca')
FEATURE(`authinfo')
-->8--

authinfo:
--8<--
AuthInfo:mail.shaw.ca   "U:USERNAME" "P:PASSWORD" "M:PLAIN"
-->8--

PSA:  Remember that Sendmail uses one or more tab(s), not a space, to 
separate the Left Hand Side (LHS) and Right Hand Side (RHS) of maps. 
authinfo is a map.


This looks correct.  At least it seems to match my vague memory.

Link - Smart Host setup with SMTP Authentication on Sendmail
 - http://dnsexit.com/support/mailrelay/sendmail.html
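
One detail the TL;DR above glosses over: like Sendmail's other maps, 
authinfo must be compiled into a database after editing (a sketch, 
assuming the hash map type):

   cd /etc/mail
   makemap hash authinfo < authinfo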




--
Grant. . . .
unix || die



Re: [gentoo-user] sendmail configuration

2020-11-25 Thread Grant Taylor

On 11/25/20 6:47 PM, the...@sys-concept.com wrote:

I've always used postifx but I want to try sendmail this time.


I've been using Sendmail for 20 years on multiple Linux and Unixes.


And I have a hard time finding gentoo howto.


Thankfully, much of Sendmail is self contained and isn't much different 
between distros / OSs.  Including Gentoo.


The biggest difference is the location of files.

Some distros / OSs don't include the configuration (m4) files with the 
binary files, thus you must install them as an additional package or 
admin sendmail.cf by hand.


ProTip:  DO NOT EDIT sendmail.cf by hand.  Always Always Always edit the 
sendmail.mc file and re-build the sendmail.cf file*.


*This is what beginning Sendmail administrators are told.  At some 
point you will edit the sendmail.cf file by hand while testing and then 
promote changes to the sendmail.mc file.  --  Editing sendmail.cf is not 
dissimilar to hex editing a binary, compared to editing the source 
(sendmail.mc) file and recompiling.
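
For reference, the edit-and-rebuild cycle is short (a sketch; the bare 
m4 invocation works because the stock sendmail.mc include()s cf.m4 
itself, as seen below):

   cd /etc/mail
   m4 sendmail.mc > sendmail.cf
   rc-service sendmail restart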



I run onto some instruction in:
  http://www.quickfixlinux.com/linux/how-to-configure-sendmail-in-linux/


The comp.mail.sendmail newsgroup is your friend.


But don't have much luck.
Original config file after emerge looks like:

cat /etc/mail/sendmail.mc
divert(-1)
divert(0)dnl
include(`/usr/share/sendmail-cf/m4/cf.m4')dnl
VERSIONID(`$Id$')dnl
OSTYPE(linux)dnl
DOMAIN(generic)dnl
FEATURE(`smrsh',`/usr/sbin/smrsh')dnl
FEATURE(`local_lmtp',`/usr/sbin/mail.local')dnl
FEATURE(`local_procmail')dnl
MAILER(local)dnl
MAILER(smtp)dnl
MAILER(procmail)dnl

I've added a line no.7
define(`SMART_HOST’,`mail.shaw.ca’)dnl

but I get an error running:
m4 /etc/mail/sendmail.mc > /etc/mail/sendmail.cf
m4:/etc/mail/sendmail.mc:7: ERROR: end of file in string


Pay very special attention to the opening and closing quotes.

Sendmail makes extensive use of the m4 macro language to "compile" the 
sendmail.mc file into the sendmail.cf file.  m4 is quite particular in 
what quotes it uses.


define(`SMART_HOST’,`mail.shaw.ca’)dnl
                  ^              ^

These quotes look wrong to me.  I don't know if this is a symptom of 
copy & pasting somewhere by someone or what.


I would expect the line to look like this:

define(`SMART_HOST',`mail.shaw.ca')dnl

m4 uses the backtick / left single quote (`, on the ~ key) to open and 
the straight single quote (', on the " key) to close.


I forgot to mention that I need to input a password to connect to
provider mail-server when sending a mail.
That is supported.  You will need to set up a map and tell Sendmail how 
to use it.  It's not difficult.  But it's been so long that I don't 
remember exactly how to do it.  It's another define(...) or feature(...) 
line and adding entries to the file they reference.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 6:10 PM, Michael Orlitzky wrote:
I think I see where we're diverging: I'm assuming that the employees of 
the VPS provider can hop onto any running system with root privileges.


Perhaps I'm woefully ignorant, but my current working understanding is 
that no virtual machine hypervisor solution provides a way for someone 
at the hypervisor level to access a guest VM as if they were root.  They 
still need to connect to a terminal (be it console or serial or ssh or 
other), log in (with credentials that they should not have) and access 
things that way.


I see little difference in the full (fat) VM compared to a stand-alone 
server.  Save for the fact that there are ways to cross access memory. 
Though I think those types of things are decidedly atypical.


My mental security model probably completely fails for containers.

I suppose you can make that pretty annoying to do. If you're willing to 
encrypt everything, then you can even put /boot on the encrypted disk, 
unlocking it in (say) grub. The VPS provider can still replace grub 
with something that faxes them your password, but it's not totally 
trivial.  (How are you accessing the console at boot time? Is it using 
software from the VPS provider? It's turtles all the way to hell.)


I'm actually not encrypting the full VM.  I have an encrypted disk.  The 
VM boots like normal, I log in, unlock the encrypted disk, mount it, and 
start services.
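
Concretely, the unlock step is nothing exotic (a sketch; the device 
name, mapping name, and mount point are made up for illustration):

   cryptsetup open /dev/vdb1 maildata     # prompts for the passphrase
   mount /dev/mapper/maildata /var/spool/mail
   rc-service sendmail start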


So, I feel like I've done the things that I reasonably can do to protect 
my email.


Or said another way, I'm not sure what else I could do that would not 
also apply to a co-lo server.


My VPS provider does offer the ability to access a console so I could 
use full encrypted system.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 4:45 PM, james wrote:
If we can get these codes running on arm64 (R.P.4) surely running them 
on AMD or intel is trivial?


I will be flabbergasted if something would run on the Raspberry Pi that 
won't run on x86 (Intel / AMD).  Presuming that it's compiled from 
common source code.



Perhaps a read on "Intel cripple AMD" functions is in order?
https://www.agner.org/forum/viewtopic.php?f=1&t=6


I don't believe this is germane to the primary topic of this thread.


(2) identical R.Pi.4 8gig rams systems, running gentoo.


Okay.


(1) dns resolver codes emails service codes etc
(1) dns resolver codes, webserver to support email services etc.


So each Raspberry Pi is performing a different function.  Okay.

I was wanting to make sure that you weren't wanting to try to do some 
sort of clustering where each Raspberry Pi could stand in for the other. 
 As that's a considerably more complex configuration.



I'm open to the stack (list) of codes necessary to securely run

1. embedded gentoo on R.P.4 (other hardware can be funded by others).

2. Any number of robust email servers-systems (open)


I've recently shared what I have used for email.

3.  a DNS servers to provide "primary dns services" a total of 
(2). More than 2 would be great.


Please elaborate on what you are proposing the network connectivity to be. 
Are you thinking the Pi's have globally routed IPs?  As such, primary 
DNS could be 192.0.2.1 and secondary DNS could be at 192.0.2.2?


Note:  It is best practice to have primary and secondary DNS servers in 
different /24 (or larger) networks.


If you are thinking two globally routed IPs, I believe that 
significantly, if not artificially, narrows the number of people that 
could participate as getting multiple IPs on a SOHO Internet connection 
can be challenging and almost always requires additional monthly fees.


Conversely, a single IP with proper network magic is a much simpler 
entry point.


4. A companion   ngnix(?) web server just to complement the project. The 
ideas is each email services collective could have their own web pages 
explaining their email and related services.


Okay.  You can run the web server on the same system.  But if you want 
to run it on a separate system, that's fine too.


I'm somewhat confused by your choice of the word "collective".

My anticipation is that many of the people that would be doing this, 
would be doing so for their own personal reasons.  Much like I have my 
domain name for my own reasons.


I don't anticipate that people will be offering services to more than a 
few friends and / or family members (if that).


5. On these (3) projects, I'd be open to other, complementary 
experimentation, as long as it is published.


Grant Taylor, do not let it go to your head, but I agree with most 
of what you write in Gentoo User.


Me?  I'm just an idiot on the Internet with some things to say. 
Sometimes they happen to be true.  Ideally, you know (or learn) enough 
to tell which is which.  ;-)


But, thank you.  :-)

6. (2) Rpi4 (8 gig) systems and extras are 2-3 hundred dollars. So it's 
total less than $900 USD dollars. NOT a bid deal for my little corp. 
Actually, if I get what I need, then it's the most inexpensive && robust 
way for my little corp to get exactly what I need. My own small email 
servers and dns resolvers supporting those email services.


Based on some back-of-the-envelope math ... sure.

I'm not funding somebody else's idea. I'm funding what *I* want, open to 
input.


That seems reasonable.

Though, I think that some of your requirements are still a bit too 
undefined.  Even independent of what software is used and how it's 
configured, there are still questions:


 - Are IP addresses globally routed or not?
 - Are said IP addresses static or dynamic?
 - What sort of clients will be accessing this?
 - Where will they be accessing from; LAN and / or Internet?

With this effort others benefit from the project. The ultimate goals 
is for hundreds of email services to be setup, gentoo centric.


OK, great. FUND what you want. Run things as you see fit


I have been.

My intention is to see if there is a way that I can contribute to your 
community project without consuming any funds so that other people might 
be able to benefit from your generosity.


Show me a concise, easy to follow set of codes and docs, and I'll just 
build (2) R.P.4 servers and share my docs 100%.


There is more to setting up and running an email server off of a SOHO 
internet connection than just how the email stack is configured.


Forget the fact, for now, that all static IPs Frontier has, are 
blocked by this same group of higher and higher standards. Really, 
I'm kinda shocked NeddySeagoon, or others have not already fixed this, 
via 100% gentoo codes, complete with ample documentation.


That's an example of the type of problem that will need to be overcome 
which is independent of the email server stack.



Just ad

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor
On 8/28/20 4:26 PM, Michael Orlitzky wrote:
The contents of the disk are unencrypted while the server is powered 
on, or at least while the server is receiving email (while it's reading 
from and writing to that disk). In practice that will be all the time 
-- you can't log in and type the disk-encryption password every time 
an email arrives.


You don't need to enter a password every time that an email comes in.

I have a VPS with an encrypted file system.  I enter the password at the 
time that it boots.


The disk and file system(s) therein are encrypted all the time.  So a 
clone of the disk will require the passphrase to unlock the key.


The only way to get the key is to extract it out of the running VPS's 
memory.  Something that I think is beyond the capability of many, but 
definitely not all, people.


A clone of the VPS will effectively present the same security posture as 
the running system.


I've been running like this for five (or more) years without any 
problems.  I think it works great.


I shouldn't have used the word "secret." Pre-established or out-of-band 
authentication would have been more accurate.


Okay.  Poor choice of words happens.  Unfortunately I mistook your 
statement to be referencing symmetric encryption with a shared secret.


With GPG, the trust is between you and I, and the VPS provider acts 
as the eavesdropper. All three parties are distinct, and the security 
can work. With TLS between MTAs, the trust is established on-the-fly 
between the other MTA and the VPS provider, but the VPS provider still 
also plays the role of the eavesdropper. When the eavesdropper 
is trusted, you're in trouble.


As long as STARTTLS is used (and validated) between the MTAs and the VPS 
provider doesn't have a way to get the keys (because they are on an 
encrypted disk), then the contents of the transmission should be fairly 
secure.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 3:33 PM, Michael Orlitzky wrote:
TLS only secures the channel; what comes out at the end is a plain-text 
message that can be read with minimal effort by the VPS provider, 
no skullduggery needed.


I agree that STARTTLS only protects the email while it's in flight 
between servers.


Though I do think that it's going to be somewhat difficult for a VPS 
provider to read the contents of the message if it's stored on an 
encrypted disk.


I think that taking a snapshot of a running VPS / VM with the disk 
encryption keys in memory and accessing it qualifies as skullduggery. 
Plus, they will still need to contend with the authentication 
requirements of the running snapshot, just like they would with the 
running VPS / VM.


So things like LUKS definitely raise the bar and make a VPS provider 
work a fair bit harder to access what's on the encrypted disk.


(And the private key for each TLS session is generated on-the-fly 
by the VPS anyway, so they could snoop on the channel too if they 
wanted to.)


Harvesting keys (TLS and / or LUKS) out of memory definitely qualifies 
as skullduggery.


You can only protect against so much.  You have to find what is 
acceptable risk.


Unless the sender and recipient have some pre-shared secret (like GPG 
assumes),


I *REALLY* thought that PGP (GPG) was based on public & private key 
pairs, much like S/MIME and TLS.


As such, Alice and Bob can encrypt messages to each other, even through 
an untrusted medium such as a questionable email server.


Yes, that still leaves the bootstrapping issue of how Alice and Bob 
get each other's public key.  --  I defer to my recent comments about 
publishing keys in DNS and relying on DNSSEC.


you're going to fall into the same trap that DRM falls into.  The 
technology provides a way for Alice and Bob to communicate securely 
in the presence of Eve, but only when Alice, Bob, and Eve are three 
distinct people. If the VPS is playing the part of both Bob and Eve, 
an off-the-shelf encryption model isn't going to work.


I see no need for either Alice nor Bob to be on the VPS.  I would expect 
that they are their own independent (smart) devices accessing their 
respective email servers.  Don't put any unencrypted sensitive data on 
the central server(s).


Decrypting the emails in any capacity on the central server means that 
the gig is up and anyone with access, OS level or more nefarious, can 
access things.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 1:54 PM, Poison BL. wrote:
I'm rather late to the game with this, but at the end of the day, 
mail coming *into* a mail server isn't typically encrypted (and even 
that is only the body, the headers can still reveal a great deal, 
and are necessary for the server to work with it).


You seem to be referring to S/MIME and / or PGP encryption.  You are 
correct that S/MIME and PGP don't offer protection for headers.


However, STARTTLS provides an encrypted channel to protect all of the 
SMTP traffic.  Thus, even the headers of email are encrypted while in 
flight between servers.


A packet dump at the switch will turn over every piece of mail you 
receive along the way.


When STARTTLS is in use, the only thing that you will see is the initial 
EHLO and STARTTLS commands.  Everything after that will be encrypted 
traffic.
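
You can watch this for yourself; openssl will drive the EHLO / STARTTLS 
exchange in the clear and then hand you the encrypted channel (hostname 
illustrative):

   openssl s_client -starttls smtp -crlf -connect mx.example.com:25

Everything you type after the handshake completes goes over the wire 
encrypted.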



Email's not designed for end to end security by default.


Encryption for anything other than military use didn't really exist when 
email was developed.


Since then, things like STARTTLS (or SMTPS if you choose to abuse it for 
B2B connections) and VPNs have become a realistic option.


Most contemporary MTAs will opportunistically use STARTTLS if it is an 
option.  We only need to worry about things that try to prevent STARTTLS 
from working.  Thankfully, this is mostly authoritarian regimes in some 
parts of the world.


Secondly, any hosting on hardware you don't control is impossible 
to fully secure, if the services on that end have to operate on 
the data at all. You can encrypt the drive, encrypt the mail stores 
themselves, etc, but all of those things will result in the encryption 
key being loaded into ram while the VPS is running, and dumping ram 
from the hypervisor layer destroys every illusion of security you 
had.


I agree those are all valid concerns.  However, we shouldn't let the 
inability to realistically achieve perfection prevent us from having 
something better than no encryption.


Dedicated hardware in a locked cabinet is as close as you get to 
preventing physical attacks when you're hosting in someone else's DC, 
and that's not nearly in the same market segment, price-wise, as a 
cheap VPS. At best, if you have sensitive email that you're sending 
or receiving, work with the other end of the communication and then 
encrypt the contents properly. Even better, go with a larger scale, 
paid, solution in which your email isn't even remotely worth the 
effort to tamper with for the hosting company's employees, and hope 
the contractual obligations are sufficient to protect you.


I'm not aware of any place that contracts will protect you against local 
court orders / government involvement.


If you have any sort of controlled data going in and out of your email, 
step up to a plan that adheres to the regulatory frameworks you're 
required to adhere to and make very sure the contracts for it obligate 
the vendor to secure things properly on their end (aws, azure/o365/etc 
mostly all have offerings for, at least, US Gov level requirements).


Yep.



--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 1:18 PM, antlists wrote:
The main reason other applications use "TCP over HTTP(S)" is because 
stupid network operators block everything else!


I agree that filtering is a problem.

I also think that it's something that most people can overcome when they 
control the firewall between the private LAN and the Internet.  (Your 
typical SOHO NATing gateway.)


The few times that I have run into filtering, it has been for 
uninitiated inbound connections.  I've almost always been able to 
initiate outbound connections to / from odd ports.  The few times that I 
could not do so in the last 20 years were resolved by engaging the ISP 
and ... politely ... getting them to knock it off.  Inbound can be more 
tricky.  But even inbound HTTP(S) was subject to the same problems. 
Actually, inbound HTTP(S) was more of a problem than other ports.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-28 Thread Grant Taylor

On 8/28/20 1:55 PM, james wrote:
I'm proposing, via a small corp I own, to purchase up to (3) dual 
Rasp.pi 4 setups of (2) R.Pi.4 8gig ram setups and send them to the 
devs WE all decide on.


A few points.

1)  I don't think that 8 GB of RAM is required.  --  My email server is 
a VPS with 2 GB of RAM and is running just fine.  So, maybe smaller 
systems would work.  And maybe that would mean that more of them could 
be acquired for the same funding.


2)  I don't know that a Raspberry Pi is strictly required for the 
testing.  I think that anything that will run Gentoo can be used to 
prove out the software stack.  --  Sure, there will /eventually/ need to 
be /some/ testing on Raspberry Pis.  But I think that testing will be 
later in the game and more of a confirmation after the fact.


3)  I'm not sure what you mean by "dual ... setups".  What are the two 
systems (be it Raspberry Pis or VPSs or VMs or something else) supposed 
to do?  -  Are you wanting primary and backup (as in MX) or some sort of 
cluster with shared file system or something else?


Let's us start compiling up the codes, keep it simple (for now) and 
implement them with gentoo-users as the testers of the email services.


These discussions should be continued to everyone's benefit. However 
there are way more than (3) folks on these threads who are most capable 
to do this community prototyping.


I think the idea of using VPSs or VMs means that a lot more people can 
participate using the same funding.


If WE do not act and get hundreds of these deployed, email, as we know 
it via RFCS and other standards may just disappeaar, or be relegated to 
the far reaches of the Internet.  What I have read, is standards based 
email services, particularly by small organizations, are under extreme 
pressure by large corporations to be marginalized out of existence.


I think I disagree with that.

Many of the big email operators are enforcing higher and higher 
standards.  But the standards /are/ /open/ and /can/ /be/ /implemented/ 
/by/ /anyone/ who wants to do so.


The /only/ thing that I've seen that is somewhat of a closed system that 
small players -- like myself -- have no real hope of is getting people 
like Google to trust our ARC (not DMARC) signatures.  Though this is 
probably more a shortcoming in the ARC specification as it doesn't 
tackle how to get providers to trust your signature as a small operator.


So any of the folks in these treads can announce publically, or send me 
private email as to your concerns. Public is best, but, I understand the 
needs for private communications sometimes. So yea, I'll personally 
finaces, at least 6 months of (3) projects.
I'll take all input, but will make my (funding) decision, in a focus, 
quick strategy.


I'm happy to participate.  My preference would be to use a VPS / VM 
(which I can provide) and allow others to take advantage of the Pis that 
are on offer.





--
Grant. . . .
unix || die



Re: [gentoo-user] new mail protocol rfc (was Re: tips on running a mail server in a cheap vps provider run but not-so-trusty admins?)

2020-08-27 Thread Grant Taylor

On 8/27/20 11:55 AM, Ashley Dixon wrote:

Well said; thanks for the correction.


Of course.  My intention is to positively contribute to and learn from 
the community.


Mathematical notation can be seen  as  a tightly coupled analogue 
to  this  sort  of  typesetting: the  same  book  that introduced 
Algebraic expressions (Cossike numbers) and  the  equals  sign  ('=') 
into  the  English-speaking world  also  suggested  the   use   of 
the   word "zenzizenizenike" to represent `x^8` [1].  Solid ideas 
will stick due to, as you said, their own merits; the form of the 
representation is  generally redundant.


Nevertheless, as xkcd so brilliantly explains, TeX inspires  a  level 
of  blind trust in the content of a document [2]. As long as you avoid 
proposing standards in the form of an animated GIF, you're probably 
going to be OK. ;-)


I wonder if this is a side effect of the fact that TeX / LaTeX is a 
difficult markup language to work in and takes considerably more time 
and effort than simple text.  As such, there is a good chance that the 
idea that someone takes the time to express in (La)TeX is probably more 
completely thought out than simple text.  After all, why would someone 
spend the time and exert the effort to finely polish a half baked idea 
in (La)TeX?


Disclaimer:  I'm speaking in general and do not mean to imply anything 
towards Caveman's efforts.  It takes gumption to go against the status quo.



I concur, but this was about the reference implementation.


Do you mean reference as opposed to initial?  Meaning that the reference 
implementation has had some time to grow and evolve and be optimized.


Fair enough.

It would be impossible to make the initial implementation the crème 
de la  crème of all implementations, unless the protocol was never 
intended to expand.


With what little I know about statistics, I think that there is a very 
small but still greater than zero percent chance of it happening.  It's 
just *EXTREMELY* unlikely.  ;-)


We do see some reference implementations  being used  as  the  de 
facto  choice  for supporting many standards, such as Apache Tomcat 
as the  ref.   imp.   for  Java Servlets, but as the name would 
suggest,  reference  implementations  are  only intended to be used 
as a reference  to  developers  of  future  implementations.


I don't think anything precludes the use of the reference implementation.

Given that things grow and evolve, I think it means that the reference 
implementation needs to be used /somewhere/ for the people maintaining 
it to gain experience and knowledge germane to said reference 
implementation.  Granted, this can be a small subset and does not need 
to be on the front lines.


Moreover, these ridiculous restrictions only encourage  various 
implementations to deviate from the standard, adding  their 
own  non-standard  extensions  like "HillaryMail HTML support". 
Implementation developers are always going  to  add stupid things to 
their software (just look at  the  GNU  `typeof`  introspection mess), 
but  the  standard  text  itself  should  certainly  not  encourage 
such behaviour.


Indeed.

I also think that it's important to keep in mind that sometimes there 
are external limitations that dictate what can and can not be done. 
Like the fact that communications circuits were not guaranteed to be 
8-bit clean when email (RFC 822 and what predates it) and SMTP (RFC 821 
and what predates it) were developed.  It's not any more fair to blame 
the authors of RFC 821 for not supporting 8-bit than it is to blame 
Sir Tim Berners-Lee for not including encryption when he developed HTML 
and HTTP.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-27 Thread Grant Taylor

On 8/27/20 7:00 AM, Caveman Al Toraboran wrote:

but i think this way of looking at protocols (despite being common) is wrong.


Why do you think that it is wrong?

What is not factually correct about it?

i also disagree with the network layering proposed by osi or the 
other ones commonly published in books.  i specially disagree with 
using such layering for studying the complexity of protocols.


If you're going to make such a statement, which is fine to do, you must 
provide information ~> evidence as to why you are doing so and why you 
think what you think.


so i suggest that if we want to study the complexity of messaging 
systems, we better not count SMTP as a single thing (like how it is 
normally done in books and talks), but instead talk about it based on 
the fundamental tasks that it actually does.  this way, SMTP becomes 
at least 2 layers:


I think that I see part of a problem.

RFC 822 - Standard for the Format of ARPA Internet Text Messages - is 
what defines what I was referring to as the opaque blob sent between 
systems.


I will argue that the content of the opaque blob that SMTP transfers is 
independent of SMTP itself.


1. "resource exchange" layer where binaries are made into a single 
giant text file by base64 encoding and then partitioned by rfc822. 
this part overlaps with http* and is much less efficient (rightfully, 
since email had to be backwards compatible as it is critical).


SMTP* does not support binary in any (original) capacity.  As such, 
email service, which /rides/ /on/ /top/ /of/ SMTP, is where the encoding 
"hack" was placed.  This /encoding/ and / or /formatting/ is completely 
independent of the SMTP protocol used to exchange opaque blobs between 
mail servers.


*I did elide some more modern SMTP extensions to simplify the previous 
statement.


To wit:  It is conceptually possible that there could be an SMTP 
exchange consisting of the following:


S:  Hello...
C:  EHLO client.example.com
S:  Nice to meet you...
C:  MAIL FROM:<sender@example.com>
S:  Okay.  Continue
C:  RCPT TO:<recipient@example.net>
S:  Okay.  Continue
C:  DATA
S:  Okay.  Continue
C:  XXX...XXX
C:  XXX...XXX
C:  XXX...XXX
C:  .
S:  Okay.  Thank you.
C:  QUIT
S:  Goodbye.

The XXX...XXX content is /OUTSIDE/ of the SMTP specification.  That 
content could conceptually be anything that you want it to be.  The only 
limitation is that the communications channel must safely support the 
bit pattern and there must not be any way to cause confusion for the 
protocol outside of it.


SMTP doesn't care /what/ the contents of the XXX...XXX is.  SMTP's job is 
to exchange the XXX...XXX between servers based on the envelope from and 
recipients.


Some, if not many, email servers have instituted sanity checks to make 
sure that the XXX...XXX has some specific content (headers) and is well 
formatted.  But this sanity checking is outside of the SMTP protocol.


So, your "where binaries are made into a single giant text file by 
base64 encoding and then partitioned by rfc822" is /NOT/ about SMTP. 
Instead it is about RFC 822 - Internet Message Format.


SMTP is RFC 821 - Simple Mail Transfer Protocol

2. "resource use" where the mail server parses such exchanged resources 
(e.g. email bodies, attachments, etc) and then acts upon them (e.g. 
forward them, discard them, etc).


Again, this is outside of the SMTP protocol.  It has nothing to do with 
how the opaque blob is transferred between servers.



and so will pop* and imap.


I'm guessing that you are making a similar mistake with POP3 and IMAP.

Both POP3 and IMAP are also about transferring an opaque blob between 
the email server and the client.


The base POP3 and IMAP protocol do not care what the contents of the 
opaque blob are.  Their goal is to get said opaque blob of XXX...XXX 
from the email server to the client.


I say base protocol because I think there are some - what I'll call - 
fancier POP3 and / or IMAP commands that can be issued to search and / 
or retrieve messages based on content in the blob.


At their base, SMTP and POP3 and IMAP are about transferring opaque 
blobs between servers.


In fact, they are the most recent replacement for alternative methods 
for exchanging the same opaque blobs.  Historically, SMTP replaced FTP 
and UUCP for exchanging the same opaque blobs.  Something could, and 
probably 

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-27 Thread Grant Taylor

On 8/27/20 6:07 AM, Victor Ivanov wrote:
I have been quietly following this discussion and I've seen SRS being 
mentioned a number of times.


Welcome to an active part in the conversation.  :-)

Now, I know what SRS _does_ (perhaps not fully?) to prevent unintended 
rejection by a receiving MTA due to SPF policy and I know it's highly 
recommended if not mandatory for any sensible MTA set-up (allegedly).


I don't think that SRS qualifies as recommended, much less mandatory. 
More specifically, I think some of the email industry discourages it.


I know that Google wants people to not use SRS and instead would prefer 
that people forward things without using SRS.  I think that Google wants 
to see the original SMTP envelope sender and make judgement calls on 
that, even if doing so would violate the purported sending domain's SPF 
records.


Note:  SRS means that the domain that is used in the re-written sender 
domain becomes associated with the content.  So if you are forwarding 
and re-writing bad content (virus, spam, or malware) your domain becomes 
associated with said bad content.


Aside:  One of the most important things when forwarding email is to 
*ONLY* forward believed -> known to be clean and good email.  Do *NOT* 
forward spam, virus, malware, or otherwise questionable email.


I've heard / read about many people talking down / badly about SRS.  But, 
I personally have used SRS for 10-15 years and have not knowingly had 
any ill side effects from doing so.  I do conditionally apply SRS to 
outgoing messages from domains that my server is not responsible for (MX 
for inbound email).


However, I'm less aware of the actual use cases where it is absolutely 
necessary to prevent above issues with SPF. Is SRS only relevant for 
MTAs that do any sort of forwarding/open relay?


In short, "yes" and "correct".

You only /need/ (for however much value you give "need") SRS when you 
are sending an email that would otherwise violate the SMTP envelope from 
domain's SPF record.  So you /might/ be able to conditionally apply SRS 
to any outgoing email where there is an SPF policy that you would be 
violating.


This is most likely going to happen when you are forwarding email.  As 
in someone has a .forward file.
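
For illustration, an SRS0 rewrite of the SMTP envelope sender looks 
something like this (hash and timestamp fields abbreviated; domains are 
made up):

   MAIL FROM:<alice@example.com>
becomes
   MAIL FROM:<SRS0=HHH=TT=example.com=alice@forwarder.example.net>

The forwarding host's domain now passes SPF for the hop it is actually 
responsible for, and the original sender is still recoverable from the 
rewritten local part.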


I think that an open relay is a bad thing.  Despite the fact that it 
would benefit from things like SRS for the above mentioned reasons, I 
think an open relay should be avoided.  As such, I don't give it much 
more thought.


Oddly enough, I find some of the explanations I came across (including 
Wikipedia) fairly atrocious compared to SPF, DKIM, and DMARC which 
are perfectly explained and their use-cases are very clear. Though 
frankly, these three are fairly self explanatory themselves for anyone 
with technical background.


I think there are those in the email industry that have an adverse 
reaction to SPF in general and view SRS as a nasty hack (which might be 
fair) that is only needed in response to SPF.  They seem to really 
dislike SPF and hate SRS.


I have been running my own MTA for just under a decade without SRS 
(but with the rest of the above) and never had issues with delivery 
- be that local or remote. However, I have explicitly disabled open 
relay and do not make use of any forwarding maps.


That sounds reasonable.  It also sounds like you have little to no need 
for SRS.


I have a few friends that I forward email for.  As such, I have need for 
SRS.



It makes me wonder - am I missing something and have I just been lucky?


I don't think you're missing anything.  Perhaps you have been lucky in 
that your user base doesn't need forwarding.  But that is perfectly fine 
and a perfectly legitimate use case.




--
Grant. . . .
unix || die



Re: [gentoo-user] new mail protocol rfc (was Re: tips on running a mail server in a cheap vps provider run but not-so-trusty admins?)

2020-08-26 Thread Grant Taylor

On 8/26/20 7:07 PM, Ashley Dixon wrote:
I meant (a), in the sense that you  should  probably  write  it  up 
in  a  more presentable fashion than a GitHub README page.  You might 
want to nicely typeset it in TeX or something to make it seem more 
serious. Just a suggestion...


I'm sure there are those that will disagree with me.  But I don't think 
it's as important how professional things look as long as they are sound 
ideas.  Judging an idea by its presentation is not far from an ad 
hominem attack, which, as previously indicated, is not a good thing.


Good ideas should be able to stand on their own.  If Caveman's idea 
turns out to be deemed better on its technical merits, then the text vs 
HTML vs TeX/LaTeX formatting shouldn't matter.


In which language are you intending to write the reference 
implementation?   I'd suggest writing it in a relatively low-level 
language, so it's  easier  to  read and port without making too 
many assumptions.


I would probably argue that using a mid to higher level language or even 
a pseudo code for documentation / explanation might be advisable.  I 
think that it's more important to get the idea out, in a way that it's 
easily understandable and re-implementable by others.


Is it better to have the first implementation be the crème de la crème 
and the overall idea not be adopted?  Or would it be better for the 
original implementation to fade into history while the concept takes 
over and surpasses current email solutions?


You really need to define more goals and non-goals; two non-specific 
goals,  and one non-goal really isn't enough to form an entire 
specification.


I would suggest starting with a problem statement.  Clearly document and 
articulate what you think is wrong with the current email solutions. 
After all, I believe that's the root of the motivation for this.


Once you have a clear problem statement, start developing possible 
solutions.  I encourage multiple -> many if not an order of magnitude 
more than the problems.


Once you have multiple possibilities, then you can objectively compare 
and contrast the possibilities to see which one is the best.


Additionally, I noticed that you have written that the "actual 
message" will  be restricted to UTF-8 with no HTML/JS/CSS,  which  you 
collectively  describe  as "self-hating formats" (?).  While I (and 
most on this list) despise the  use  of the aforementioned formats 
in e-mails to the appropriate extent, I  struggle  to see how you 
are going to prevent them being transmitted using HillaryMail.


If there is anything that the industry is good at, it's encoding things 
such that they can be transported by something that can't natively 
support the unencoded thing.


All of the control codes of HTML are fully representable in ASCII, 
which  is  a strict subset of Unicode.  How are you going to prevent 
people transmitting HTML over the protocol?  It is up to the client 
to parse the HTML into  its  intended aesthetic form; the server 
has nothing to do with it.  The only solution I could imagine is 
rejecting all messages containing attachments with MIME  types  other 
than plain-utf8, but is that really a good idea?


I think trying to restrict things will do more harm to the idea than the 
idea itself would do good.  It's likely to cause people to reject it out 
of hand as why would they want to choose something that fights them?



I am interested, but gravely skeptical.


Well said.

I am not overtly opposed to the concept of replacing SMTP and enhancing 
email.  I just want to make sure that whatever eventually does replace 
SMTP does so because it can stand up to technical scrutiny far worse 
than anything I can throw at it.  It should succeed because it really is 
better, not because someone wants it to be better.


Historians will judge us and the decisions we make harshly, just like we 
judge our previous technical brethren harshly for their decisions.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-26 Thread Grant Taylor

On 8/18/20 6:44 PM, Grant Taylor wrote:

I will have to collect a list and get back to you.


Here are part of some crude notes that I created for myself to use to 
build a Gentoo mail server about three years ago.  This is the email 
specific parts.  The rest were for other non-email aspects.


Note:  This just gets things installed.  It doesn't do anything for how 
to configure things.  (I was copying config files between systems.)


The square brackets at the end of the line are dependencies.  (I use the 
word loosely.)


 - eMail
- Sendmail (mail-mta/sendmail) [ipv6 sasl ssl]
   - emerge -a mail-mta/sendmail
   - chmod g+w /var/spool/mqueue
   - rc-update add sendmail default
   - {mc,cf} file
   - certificates
   - cp /etc/pam.d/{pop,smtp}
   - touch /etc/mail/access
   - makemap hash /etc/mail/access < /etc/mail/access
   - touch /etc/mail/genericstable
   - makemap hash /etc/mail/genericstable < /etc/mail/genericstable
- Cyrus-SASL (pulled in)
   - rc-update add saslauthd default
- Nail (mail-client/nail) []
   - emerge -a mail-client/nail
- Procmail (mail-filter/procmail) []
   - emerge -a mail-filter/procmail
- Mutt (mail-client/mutt) []
   - emerge -a mail-client/mutt
- SpamAssassin (mail-filter/spamassassin) [cron ipv6 ssl]
   - emerge -a mail-filter/spamassassin
   - rc-update add spamd default
   - sa-update
- SpamAssassin Milter (mail-filter/spamass-milter) []
   - emerge -a mail-filter/spamass-milter
   - rc-update add spamass-milter default
- ClamAV (app-antivirus/clamav) [ipv6 milter]
   - emerge -a app-antivirus/clamav
   - rc-update add clamd default
   - /etc/conf.d/clamd
  - START_MILTER=yes
- SPF (mail-filter/libspf2) []
   - emerge -a mail-filter/libspf2
- SRS (mail-filter/libsrs2) []
   - emerge -a mail-filter/libsrs2
- OpenDKIM (mail-filter/opendkim) [ssl]
   - echo "mail-filter/opendkim ldap" > 
/etc/portage/package.use/opendkim

   - emerge -a mail-filter/opendkim
   - rc-update add XXX default
- OpenDMARC (mail-filter/opendmarc) []
   - emerge -a mail-filter/opendmarc
   - rc-update add XXX default
- CourierIMAP (net-mail/courier-imap) [ipv6]
   - emerge -a net-mail/courier-imap
   - /etc/courier/conf files
   - mkimapcert
   - mkdhparams
   - rc-update add courier-imapd-ssl default
   - Install valid SSL cert.



--
Grant. . . .
unix || die



Re: [gentoo-user] new mail protocol rfc (was Re: tips on running a mail server in a cheap vps provider run but not-so-trusty admins?)

2020-08-26 Thread Grant Taylor

On 8/26/20 3:33 PM, Grant Taylor wrote:

I would suggest using any reference to Hillary Clinton.


Typo:  I would suggest *NOT* using any reference to Hillary Clinton.



--
Grant. . . .
unix || die



Re: [gentoo-user] new mail protocol rfc (was Re: tips on running a mail server in a cheap vps provider run but not-so-trusty admins?)

2020-08-26 Thread Grant Taylor

On 8/26/20 2:33 PM, Caveman Al Toraboran wrote:

as for the name "hillarymail", nothing against her.


I would suggest using any reference to Hillary Clinton.  I believe her 
name is too politically charged to use it in good faith.


it's just that i heard so much about hillary's mails up to a point 
all mails started to feel as if they belong to her.


Almost everything I heard about it seemed to be negative.  What little I 
heard that wasn't negative was simply neutral.


My opinion is that the name (independent of H.C.) and H.C.'s association 
would cause me to choose a different name.


I'd suggest something that describes what it does.

Note:  I've not read your proposal yet.  I've earmarked it to do so when 
I have more time.


i intend to eventually write a reference implementation either way 
(hopefully).  specially that this seems to me very easy to implement, 
yet it seems also powerful.


"seems" is the operative word.  I think you will find that there is MUCH 
more to it than what I think you think there is.



not sure what "formal r.f.c." means.

(a) if it means a less ambiguous description, then "yes, but at a 
natural pace based on demand" (in the spirit of occam's razor).


How does Occam's Razor or Parsimony apply to this?


(b) if it means an r.f.c. submitted to isoc/ietf, then "no".


RFCs are decidedly outside of ISOC / IETF.

i think we should ignore standard bodies for awhile since they seem 
to be ignoring us.


That sentiment / attitude is concerning to me.

Just because you don't like something or disagree with it is not in and 
of itself a reason to ignore or avoid it.  Especially when it is (RFCs 
are) the well established process to do something in the Internet community.


imo that's a parsing error on your side.  to me "idiot" didn't refer 
to smtp/pop/imap users.


I would strongly suggest avoiding anything that can be construed as an 
ad hominem attack.


it rather referred to those who can't use address books or 
bitcoin.


I think those are two very different things.  Address books have existed 
for a long time and are included in all prominent devices / email 
clients in one form or another.  The same can't be said about bitcoin.


either way i've just replaced "idiots" by "people".  "idiot" wasn't 
justified either way.


There's no place, much less need for ad hominem attacks in standards 
documents, be they formal, e.g. ISO, IETF, or non-standards based, e.g. RFC.



i mean easy for both,


I find the goal of easy for the user to use and easy for the developer 
to develop to be almost diametrically opposed to each other.  In my 
experience, it's either been a pay it forward once (developer) or pay it 
back perpetually (user).


Which are you prioritizing?  The developer or the user?

I would strongly suggest the developer pay it forward so that the end 
users don't have to perpetually pay it back.



but subject to the constraints specified under "goals" and "non-goals".

e.g. if becoming easier would cause the protocol to end up needing 
to trust a sys admin, then that's not acceptable.


Elaborate on what "trusting a system administrator" means and why it's 
not acceptable.


I think you will find that there are regimes that will not allow 
technology that they can't see into and observe.  As such, they will 
dictate that the technology trust the system administrator (regime). 
Lest they ban the technology.


but if it is possible to make it easier while still satisfying 
the constraints (goals/non-goals), then that's a good step forward 
(perhaps draft one?).


The requirement to go from informational RFC to standards track RFC is 
multiple independent implementations.  So not only do you need to get 
your implementation working, but you also need to get someone else to 
completely independently create their own implementation and it must be 
interoperable with yours.  Anything less and you'll never achieve 
anything but an informational RFC status.  And you will need a standards 
track RFC status to get the big players to even think about entertaining 
the notion of this.




--
Grant. . . .
unix || die


