Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-26 Thread Grant Taylor

On 8/21/20 5:58 PM, Rich Freeman wrote:
It is what just about every other modern application in existence uses. 


VoIP does not.

No RDBMSs that I'm aware of use it as their primary protocol.  (Some may 
be able to use HTTP(S) as an alternative.)


Outlook to Exchange does (did?) not use it.  It does have a sub-optimal 
RPC-over-HTTP(S) option for things like mobile clients.  But you still 
get much better service using native non-HTTP(S) protocols.


I'm not aware of any self hosted enterprise grade remote desktop 
solution that uses HTTP(S) as its native transport.


Just because it's possible to force something to use HTTP(S) does not 
mean that it's a good idea to do so.


I'm sure there are hundreds of articles on the reasons for this that 
are far better than anything I'd come up with here.


Probably.


And they don't work for anything OTHER than SMTP.


There are /other/ libraries that work for /other/ things.

Having a general thing that can be abused for almost all things is 
seldom, if ever, the optimal way to achieve the goal.


A library for JSON/webservices/whatever works for half the applications 
being written today.


I choose to believe that even that 50% is significantly sub-optimal and 
that they have been pressed into that role for questionable reasons.



This is simple.  This is how just about EVERYBODY does it these days.


I disagree.

Yes, a lot of people do.  But I think it's still a far cry from "just 
about everybody".



http works as a transport mechanism.


Simply working is not always enough.  Dial up modems "worked" in that 
data was transferred between two endpoints.  Yet we aren't using them today.


Frequently, we want optimal or at least the best solution that we can get.

That is the beauty of standards like this - somebody else figured out 
SSL on top of HTTP so we don't need an email-specific reimplementation 
of that.


I think that you are closer than you realize or may be comfortable with.

"somebody else figured out" meaning that "someone else has already done 
the work or the hard part".  Meaning that it's possible to ride people's 
coat tails.


HTTP(S) as a protocol has some very specific semantics that make it far 
from ideal for many things.  Things like the server initiating traffic 
to clients.  Some, if not many, of these semantics impose artificial 
limitations on services.



I mean, why use TCP?


For starters, TCP ensures that your data arrives at the other end (or 
notifies you if it doesn't), that it's in order, and that it's not 
duplicated.


There are multiple other protocols that you can use.  UDP is a prominent 
one.
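Those transport guarantees can be seen in miniature with stream sockets.  A 
sketch in Python, using a Unix-domain socket pair as a stand-in for TCP's 
stream semantics (it shares the ordered, reliable, no-duplicates byte-stream 
behavior, though it is not TCP itself); the commands sent are just sample 
bytes:

```python
import socket

# socketpair() gives a connected pair of Unix-domain stream sockets --
# a stand-in here for TCP's in-order, reliable byte-stream semantics.
a, b = socket.socketpair()

# Two separate sends on one end...
a.sendall(b"MAIL FROM:<alice@example.com>\r\n")
a.sendall(b"RCPT TO:<bob@example.com>\r\n")
a.close()

# ...arrive on the other end complete, in order, and exactly once,
# as one contiguous byte stream.
received = b""
while chunk := b.recv(1024):
    received += chunk
b.close()

print(received.decode())
```

With UDP you would instead get discrete datagrams that may be dropped, 
duplicated, or reordered, and the application would have to handle all of 
that itself.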



Why not use something more tailored to email?


Now you're comparing an application layer protocol (email / SMTP) to a 
network transport protocol (TCP / UDP / etc.).


What transport layer protocol would you suggest using?

Be careful to trace higher layer abstraction protocols, e.g. QUIC, back 
down to the transport layer protocol (UDP in QUIC's case).


TCP probably has a dozen optional features that nobody uses with email, 
so why implement all that just to send email?


What contemporary operating system does not have a TCP/IP stack already?

How are applications on said operating system speaking traditional / 
non-QUIC based HTTP(S) /without/ TCP?


Even exclusively relying on QUIC still uses UDP.

The answer is that it makes WAY more sense to recycle a standard 
protocol than to invent one.


You're still inventing a protocol.  It's just at a higher layer.  You 
still have to have some way / method / rules / dialect, more commonly 
known as a protocol, on whatever communications transport you use.


Even your web based application needs to know how to communicate 
different things about what it's doing.  Is it specifying the sender, 
the recipient, the subject, or something else?  Protocols are what 
define that.  They are going to exist.  It's just a question of where 
they exist.


You're still inventing a protocol.

Do you want your protocol to run on top of a taller more complex stack 
of dependencies?  Or would you prefer a shorter simpler stack of 
dependencies?


You're still inventing a protocol.  You're just choosing where to put 
it.  And you seem to be in favor of the taller more complex stack. 
Conversely I am in favor of the shorter simpler stack.


If SMTP didn't exist, and we needed a way to post a bunch of data to 
a remote server, you'd use HTTP, because it already works.


No.  /I/ would not.

HTTP(S) is inherently good at /retrieving/ data.  We have used and 
abused HTTP(S) to make it push data.


Depending on what the data was and its interactive nature, I would 
consider a bulk file transfer protocol (FTPS / SFTP / SCP) or even NAS 
protocols.  Things that are well understood and well tested.


If I had some reason that they couldn't do what I needed or as 
efficiently as I needed, I would develop my own protocol.  I would 
certainly do it on top of a transport protocol that ensured my data 

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-26 Thread Grant Taylor

On 8/21/20 10:15 PM, Caveman Al Toraboran wrote:
just to double check i got you right.  due to flushing the buffer to 
disk, this would mean that mail's throughput is limited by disk i/o?


Yes.

This speed limitation is viewed as a necessary limitation for the safety 
of email passing through the system.


Nothing states that it must be a single disk (block device).  It's 
entirely possible for a fancy MTA to rotate through many disks (block 
devices), using a different one for each SMTP connection.  Thus, in 
theory, allowing some connections to operate in close lockstep with each 
other without depending on / being blocked by any given disk (block 
device).
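A sketch of what that rotation might look like; the directory names and 
three-disk layout are entirely hypothetical:

```python
from itertools import cycle

# Hypothetical MTA spool layout: several spool directories, each backed by
# its own block device, handed out round-robin so that concurrent SMTP
# sessions don't all serialize on one disk's fsync latency.
SPOOL_DIRS = [
    "/var/spool/mta/disk0",
    "/var/spool/mta/disk1",
    "/var/spool/mta/disk2",
]
_next_spool = cycle(SPOOL_DIRS)

def spool_for_connection():
    """Pick the spool directory for a newly accepted SMTP connection."""
    return next(_next_spool)

# Four consecutive connections get disk0, disk1, disk2, then wrap to disk0.
assignments = [spool_for_connection() for _ in range(4)]
print(assignments)
```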


Thank you for the detailed explanation Ashley.


or did i misunderstand?


You understand correctly.

i sort of feel it may suffice to only save to disk, and close fd. 
then let the kernel choose when to actually store it in disk. 


As Ashley explained, some MTAs trust the kernel.  I've heard of others 
issuing a sync after the write.  But that is up to each MTA's 
developers.  They have all taken reasonable steps to ensure the safety 
of email.  Some have taken more-than-reasonable steps.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-21 Thread Grant Taylor

On 8/21/20 11:54 AM, Caveman Al Toraboran wrote:
thanks.  highly appreciate your time.  to save space i'll skip parts 
where i fully agree with/happily-learned.


You're welcome.

(e.g. loop detection; good reminder, i wasn't thinking about it. 
plus didn't know of acronyms DSN, MDNs, etc; nice keywords for 
further googling).


;-)

i was thinking (and still) if such relay-by-relay delivery increases 
probability of error by a factor of n (n = number of relays in the 
middle).  e.g. probability of accidental silent mail loss is if one, 
or more, accidentally said "yes got it!"  but actually didn't.  i.e.:


It definitely won't be a factor of n, where n is the number of relays.

i wonder if it would be better if only the entry relay aims at the 
confirmation from the terminal server?  this way we won't need to 
assume that relays in the middle are honouring their guarantees, 
hence the probability above would be smaller since k is limited up 
to 2 despite n's growth.


Nope.

Each and every server MUST behave the same way.

care to point part of the rfc that defines "safe" commit to disk? 
e.g. how far does the rfc expect us to go?  should we execute `sync`'s 
equivalent to ensure that data is actually written on disk and is 
not in operating system's file system write buffer?


TL;DR:  Yes on all accounts.

See the recent reply about guarantee and relays for more details.

onion signatures?  e.g. message is wrapped around several layers of 
signatures for every relay in the path?


That doesn't help the problem.  We sort of have an onion already.

It's quite likely possible to validate the signature of the immediate 
sending server.  But how does the receiving server know how to undo any 
modifications that the immediate sending server made, to be able to 
validate the previous sending server's signature?  Rinse, lather, repeat 
for multiple levels.
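The unwrap problem can be illustrated with plain hashes standing in for 
signatures; the message bytes and header are invented:

```python
import hashlib

# Each relay prepends a Received: header, so the bytes the next hop sees
# are not the bytes the previous hop signed (hashes stand in for
# signatures here).
original = b"From: alice@example.com\r\n\r\nHello Bob\r\n"
signed_digest = hashlib.sha256(original).hexdigest()  # what hop 1 "signed"

# Hop 2 prepends its trace header before passing the message on.
relayed = b"Received: from mx1.example.com\r\n" + original

# Hop 3 can no longer reproduce hop 1's digest from what it received...
assert hashlib.sha256(relayed).hexdigest() != signed_digest

# ...unless it knows exactly which bytes hop 2 added and strips them first.
stripped = relayed.split(b"\r\n", 1)[1]
assert hashlib.sha256(stripped).hexdigest() == signed_digest
print("digest only matches after undoing the relay's modification")
```

With several relays, each making its own (possibly undocumented) changes, 
that stripping step has to be repeated correctly at every layer, which is 
exactly the part that has no reliable solution.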


What if some of the servers sign what they send vs what they receive?

These are the types of problems that don't currently have a solution.

e.g. whitelisting, tagging, spam filtration, prioritizing, etc, 
based on entities that onion-signed the message.


How is doing those things based on signature different than doing them 
based on the system sending to you?


The only thing that comes to mind is trusting an upstream signature, but 
not the immediate sending system.  But, you have to unwrap the onion to 
be able to validate the signature.  But to do that you need to know what 
the server(s) downstream of the signature being validated did to the 
message.


Some of this is a one way trap door without any indication of what each 
trap door did to the message.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-21 Thread Grant Taylor

On 8/21/20 11:01 AM, Caveman Al Toraboran wrote:

yes, i do consider re-inventing octagonal wheels.


I think that it's occasionally a good thing to have a thought experiment 
about how $THING might be made better.


It's probably good to have discussions around green field types of 
replacements.


But it's important to eventually assess the pros and cons of the old (as 
it exists), the new (as from green field), and the transition between 
the two.


Sometimes the new doesn't warrant the transition, but it does provide an 
option that might be worth augmenting into the old.


If nothing else, it's good to have the discussions and be able to answer 
why something was done or chosen to remain the same.


here, i'm just "asking" to see what makes the "safely stored" 
guarantee.


MTAs are supposed to be written such that they commit the message to 
persistent storage medium (disk) before returning an OK message to the 
sending server.


There is some nebulous area around what that actually means.  But the 
idea is that the receiving server believes, in good faith, that it has 
committed the message to persistent storage.  Usually this involves 
writing the message to disk, probably via a buffered channel, and then 
issuing system calls to ask the OS to flush the buffer to disk.
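A minimal Python sketch of that commit-before-acknowledge sequence, using a 
temporary directory as a stand-in spool; the reply text is illustrative, not 
any particular MTA's:

```python
import os
import tempfile

def accept_message(spool_dir, message):
    """Commit a message to the spool, then (and only then) acknowledge it."""
    fd, path = tempfile.mkstemp(dir=spool_dir, prefix="msg.")
    with os.fdopen(fd, "wb") as f:
        f.write(message)          # may sit in a user-space buffer
        f.flush()                 # push it into the kernel page cache
        os.fsync(f.fileno())      # ask the kernel to commit it to disk
    # Only after the fsync() returns do we tell the sending server "OK".
    return "250 OK: queued as " + os.path.basename(path)

with tempfile.TemporaryDirectory() as spool:
    reply = accept_message(spool, b"Subject: test\r\n\r\nhello\r\n")
    print(reply)
```

The risky shortcut discussed below is skipping the `flush()`/`fsync()` pair 
and replying "250 OK" while the message exists only in memory.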


Is there room for error?  Probably.

Has the server made (more than) reasonable effort to commit the message 
to disk?  Yes.


The point being, don't acknowledge receipt of the message while the 
message is only in the MTA's memory buffer.  Take some steps to commit 
it to persistent storage.


That being said, there are some questionable servers / configurations 
that will bypass this safety step in the name of performance.  And they 
/do/ /lose/ /email/ as a negative side effect if (when) they do crash. 
This is a non-default behavior that has been explicitly chosen by the 
administrators to violate the SMTP specification.  Some MTAs will log a 
warning that they are configured to violate RFCs.


got any specific definition of what makes a storage "guaranteed"? 
e.g. what kind of tests does the mail server do in order to say "yup, 
i can now guarantee this is stored safely!"?


It's more that they do something safe (write the message to disk) 
instead of risky (only store it in memory).


Everything can fail at some point.  It's a matter of what and how many 
reasonable steps you took to be safe.  Read: Don't cut corners and do 
something risky.



i guess you think that i meant that a relay should be mandatory?


Sending / outbound SMTP servers /are/ a relay for any messages not 
destined to the local server.


There are almost always at least two SMTP servers involved in any given 
email delivery.  All but the final receiving system are relays.


(yes, a relay doesn't have to be used.  i'm just describing some uses 
of relays that i think make sense.  (1) indicate trust hierarchy, (2) 
offload mail delivery so that i can close my laptop and let the relay 
have fun with the retries.  not sure there is any other use.  anyone?)


There are many uses for email relays.  A common one, and best practice, 
is to have an /inbound/ relay, commonly known as a backup email server. 
The idea being it can receive inbound messages while the primary email 
server is down (presumably for maintenance).


Many SaaS Email Service Providers (ESPs) /are/ relay servers.  They 
receive email from someone and send it to someone else.


A number of email hygiene appliances function as an email relay between 
the world and your ultimate internal email server.  Services that filter 
inbound email qualify here too.


It is common, and I think it's best practice, to have web applications 
send email via localhost, which is usually a relay to a more capable hub 
email server which sends outbound email.  Both of these layers are relays.


A relay is to email what a router is to a network.



--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-21 Thread Grant Taylor

On 8/21/20 6:37 AM, Rich Freeman wrote:
This stuff can be interesting to discuss, but email is SO entrenched 
that I don't see any of this changing because of all the legacy issues. 
You would need to offer something that is both easier and better to 
use to such a degree that people are willing to change.


"something that is /both/ *easier* and *better*".

That's a VERY tall bar to clear.


1. Modernizing the actual data exchange using http/etc.

I don't think you'll get much argument that SMTP isn't the way 
somebody would do things if they were designing everything from 
scratch.


SMTP may not be the best, but I do think that it has some merits. 
Merits that the previously mentioned HTTP/2 alternative misses.


Remember, SMTP is a protocol, or a set of rules, for how two email 
servers are supposed to exchange email.  These rules include the following:


1)  Proper server greeting, used to identify the features that can be 
used.  (HELO for SMTP and EHLO for ESMTP.)

2)  Identifying the sender.
3)  Identifying the recipient(s).
4)  Sending the message payload.

Each of these discrete steps is a bi-directional communication. 
Communication that can alter all subsequent aspects of a given SMTP 
exchange.


The server could be an older server that only speaks SMTP and as such 
doesn't support STARTTLS.  Thus causing the client to /not/ try to use 
STARTTLS in accordance with current best practices.  Other ESMTP 
features come into play here like message size, 8-bit, notification 
features, etc.
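That capability negotiation can be sketched as a simple parse of the EHLO 
reply; the reply strings here are fabricated examples:

```python
def advertised_extensions(ehlo_reply):
    """Extract ESMTP keywords from a multi-line 250 reply to EHLO."""
    exts = set()
    for line in ehlo_reply.splitlines()[1:]:   # skip the greeting line
        exts.add(line[4:].split()[0].upper())  # strip the "250-"/"250 " prefix
    return exts

# A modern ESMTP server advertises its features after EHLO...
modern = ("250-mail.example.com\r\n"
          "250-SIZE 35882577\r\n"
          "250-STARTTLS\r\n"
          "250 8BITMIME")
# ...while an old SMTP-only server advertises nothing.
legacy = "250 mail.example.com"

# The client only attempts STARTTLS when the peer advertised it.
assert "STARTTLS" in advertised_extensions(modern)
assert "STARTTLS" not in advertised_extensions(legacy)
print(sorted(advertised_extensions(modern)))
```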


The receiving server has the ability to assess the sending server's 
behavior at each point to use that as an indication to know if the 
sender is a legitimate email server or something more nefarious and 
undesirable.  --  There is a surprising amount of email hygiene based on 
sending server's behavior.


The receiving server may carte blanche reject messages from specific 
senders / sending domains and / or to specific recipients, particularly 
non-existent recipients.


SMTP has low overhead in that the message body is not transferred from 
the sending server to the receiving server if any part of steps 1-3 
aren't satisfactory to the receiving server.
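That cost saving can be modeled with a toy session function; the policy 
check, addresses, and reply texts are all made up for illustration:

```python
def smtp_session(envelope_from, rcpt, body, rcpt_ok):
    """Toy model of an SMTP exchange: reject at RCPT and DATA never happens."""
    transcript = [f"MAIL FROM:<{envelope_from}>",
                  f"RCPT TO:<{rcpt}>"]
    if not rcpt_ok(rcpt):
        transcript.append("550 no such user")  # rejected before DATA
        return transcript
    transcript.append("DATA")
    transcript.append(body)                    # body crosses the wire only now
    return transcript

valid = lambda r: r == "bob@example.com"
ok = smtp_session("alice@example.com", "bob@example.com",
                  "20MB payload...", valid)
bad = smtp_session("alice@example.com", "nobody@example.com",
                   "20MB payload...", valid)

# A rejected recipient never costs the sender the message body.
assert "DATA" in ok
assert "DATA" not in bad and "20MB payload..." not in bad
print(bad[-1])
```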


I'm not an API expert but I imagine that just about any of the modern 
alternatives would be an improvement.


Maybe.  Maybe not.

What is the actual problem with SMTP as it is?

What /needs/ to be improved?  What could benefit from improvement even if 
it's not /needed/?



http would be a pretty likely protocol to handle the transportation,


Why HTTP?

Did you mean to imply HTTPS?

Why add an additional protocol to the stack?

TCP / SMTP is two layers.

TCP / HTTP / $Email-protocol-du-jour is three layers.

UDP / HTTP / $Email-protocol-du-jour is three layers.

Why introduce an additional layer?

Why introduce an additional dependency in the application stack?

Why co-mingle email with other non-email traffic?  --  Or are you 
wanting to run HTTP(S) on TCP port 25?



likely with something like JSON/xml/etc carrying the payload.


Why add an additional encapsulation on top of the additional layer?

What does JSON / XML / etc. on top of HTTP(S) provide that SMTP doesn't 
provide?


What is the gain?

Is the gain worth the cost of doing so?

You might want to tweak things slightly so that recipient validity can 
be tested prior to transferring data.


Definitely.  (See above for more details as to why.)

But now you are doing multiple exchanges.

If we extrapolate this out to also include sender validation, and 
probably hello messages, you are back to four or more exchanges.


How is this better than SMTP on TCP port 25?  What is the benefit?  What 
does it cost to get this benefit?


A mail server doesn't want to accept a 20MB encrypted payload if it can 
bounce the whole message based on the recipient or a non-authenticated 
sender or whatever.


Which is one of the many reasons why there are multiple exchanges back 
and forth.


However, in principle there are plenty of modern ways to build a 
transport protocol that would be an improvement on SMTP ...


Please elaborate.

Please include what network layer (3) protocol(s) you want to use and why.

Please include what application layer (7) protocol(s) you want to use 
and why.


Please include any encoding / encapsulation you want to use and why.


... use more standard libraries/etc.


There are many libraries to interface with SMTP.

Also, handing a message off to an SMTP server alleviates the application 
from having to deal with failed deliveries, queuing, and retrying.


Why add that complexity into each and every application?  Especially if 
you don't have to?


Note:  Pulling it in via a library is still indirectly adding it to each 
and every application.


How is SMTP /not/ standard?

Also:  https://xkcd.com/927/  --  Standards

I think this is actually the part of your proposal that might be 
more likely to take off.  You could have a new type of MX field in 
DNS ...


So yet more complexity in 

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-21 Thread Grant Taylor

On 8/20/20 7:39 PM, Caveman Al Toraboran wrote:

 1. receipt by final mail server (mandatory).


You're missing the point that each and every single server along the 
path between the original submission server and the final destination 
server is on the hook for delivery of the message -or- notification of 
its failure back to the purported sender address.  So "final mail 
server" is not sufficient.

Each receiving server in the chain tells the sending server a definitive 
message that means "Okay, I've received the message and I will dutifully 
pass it on to the next server -or- report a problem."


The RFCs defining SMTP don't consider a server crashing / power failing 
/ etc. after saying "Okay..." as sufficient reason to fail to perform 
the job.  Even in the event of a crash / power fail, the receiving 
server MUST process the email when it recovers.


Of course, there are servers that go against the RFC "MUST" directives 
and either don't safely commit messages to disk /before/ saying 
"Okay..." and / or don't deliver failure messages.  There are still 
other servers that don't accept such incoming failure notices.  Both 
behaviors are against RFC "MUST" directives and as such violate the SMTP 
specification, thereby breaking interoperability.  Particularly the 
"trust" part thereof.


These failure notifications have standardized on Delivery Status 
Notification, a.k.a. DSN, and historically called a "bounce".  There has 
been evolution from many disparate formats of a bounce to an industry 
standard DSN.  Said industry standard DSN is defined in RFCs.


DSNs are mandatory for failures.  The only exception is if the sending 
system uses an extra option to specify that a DSN should /not/ be sent 
in the event of a failure.  Receiving systems are compelled to send DSNs 
in the absence of said option.
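For illustration, that opt-out rides on the RCPT TO command as a NOTIFY 
parameter, per the RFC 3461 DSN extension (the address is invented):

```python
def rcpt_command(recipient, notify=None):
    """Render an RCPT TO command, optionally with an RFC 3461 NOTIFY param."""
    cmd = f"RCPT TO:<{recipient}>"
    if notify is not None:
        cmd += f" NOTIFY={notify}"
    return cmd

# Default: no parameter, so a failure DSN is mandatory.
default_cmd = rcpt_command("bob@example.com")
# The sender explicitly opts out of failure notifications.
quiet_cmd = rcpt_command("bob@example.com", "NEVER")

print(default_cmd)
print(quiet_cmd)
```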



 2. receipt by end user(s) (optional).
 3. opening by end user(s) (optional).


These notifications are generally considered to be Message Disposition 
Notifications, and are optional.  Further, the client is what sends MDNs 
/if/ it chooses to do so.  MDNs are even more unreliable than DSNs.


(1) is required by the server, else mail will be retransmitted from 
source relay(s) (or client if done directly).  (2) is optional by 
final server, (3) is optional by end user's client.


#1 is generated by RFC compliant servers.  Not all servers are RFC 
compliant.


#2 and #3 are generated by end user clients.  Not all clients are 
willing to do so.



the job of a relay would be to optionally add some metadata


This really isn't optional.

*SOMETHING* /standard/ *MUST* be added.  If for no other reason than to 
detect and prevent loops.
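Counting standard trace metadata is how that loop prevention works in 
practice, in the spirit of (though simplified from) real MTA hop limits such 
as Postfix's hopcount_limit; the limit and header values here are arbitrary:

```python
from email.message import EmailMessage

MAX_HOPS = 25  # arbitrary limit for the sketch

def relay(msg):
    """Toy relay: bounce when too many Received: trace headers accumulate."""
    if len(msg.get_all("Received") or []) >= MAX_HOPS:
        return "554 5.4.6 too many hops -- mail loop suspected"
    # Each hop prepends its own trace header before forwarding.
    msg["Received"] = "from mx.example.com; Wed, 26 Aug 2020 12:00:00 +0000"
    return "250 OK"

msg = EmailMessage()
msg["From"] = "alice@example.com"

# Relay the same message repeatedly, as a mail loop would.
replies = [relay(msg) for _ in range(MAX_HOPS + 1)]
print(replies[0], "/", replies[-1])
```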


(e.g. maybe describing sender's role) and sign the whole thing (e.g. by 
company's private key).


I would suggest a /server's/ private key, and *NOT* the company's 
private key.  There is a HUGE difference in potential for private key 
exposure.  If one server is compromised, the entire company could be 
crippled if the /company's/ private key is used.  Conversely, if the 
/server's/ private key is used, then /only/ /the/ /server/ is compromised.


It is quite possible to have the company's public key published such 
that the company's internal CA can issue individual keys to each server.


Signing will be of somewhat limited value as it will quite likely be 
subject to the same problem that DMARC / ARC suffer from now.  Mail 
servers can sign what they receive.  But in doing so, they alter what is 
sent to include their signature.  As such, the data that the next server 
receives is different.  The real problem is working backwards. 
Downstream servers don't have a reliable way to undo what upstream 
servers have done, to be able to get back to the original message to 
validate signatures.


There is also the fact that various fields of the email are allowed to 
have specific trivial modifications made to them, such as line wrapping 
or character encoding conversion.  Such changes don't alter the content 
of the message, but they do alter the bit pattern of the message.  These 
types of changes are even more difficult to detect and unroll as part of 
signature validation.



this way we can have group-level rules.


I'm not quite sure what you mean by group-level rules in this context.



--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-19 Thread Grant Taylor

First, well said.

On 8/19/20 2:27 AM, Ashley Dixon wrote:
Apologies for my unintended verbosity. My subconscious _really_ 
wanted to point out that SMTP is (generally) RELIABLE. ;-)


Second, this is an understatement.

Per protocol specification, SMTP is EXTREMELY robust.

It will retry delivery, nominally once an hour, for up to five (or 
seven) days.  That's 120-168 delivery attempts.
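The arithmetic behind that figure, spelled out (a nominal one-hour retry 
interval over a five-to-seven-day queue lifetime):

```python
# Nominal SMTP retry schedule: one delivery attempt per hour until the
# message either goes through or ages out of the queue.
attempts_per_day = 24
five_days = 5 * attempts_per_day    # typical minimum queue lifetime
seven_days = 7 * attempts_per_day   # typical maximum queue lifetime
print(five_days, "to", seven_days, "delivery attempts")
```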


Further, SMTP implementations MUST (RFC sense of the word) deliver a 
notification back to the sender if the implementation was unable to 
deliver a message.


SMTP's ability to deliver email without end-to-end connectivity is 
almost unmatched.  UUCP and NNTP are there with it.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-18 Thread Grant Taylor

On 8/18/20 4:25 PM, james wrote:

I find all of this *fascinating*.


;-)

So I have threads from 7/28 and others that attempt to discover  the 
(gentoo) packages necessary to run my own email services. I have (2) 
R.Pi4 (8Gram) and (2) more on order to build out complete 
mail/DNS/security for a small/moderate number of folks to use. Just me 
to start/test/debug.


I expect that, other than CPU speed, the four systems that you have are 
probably overkill.


The CPUs may, or may not, be slow depending on the number of messages 
you want to handle a day.  They are probably quite adequate to start 
with for personal email.


I'd like to build out Grant(Taylor) and Ashley's solution for further 
learning and testing, on Rpi4 based gentoo systems. robust security and 
reasonable straightforward (gentoo) admin, is my goal.


Sorry to be pedantic, but please list out what you mean by "robust 
security".  I ask more as an exercise for you to think about, and — more 
importantly — document goals that you'd like to achieve.  This 
documentation may seem somewhat silly, but as has been mentioned 
multiple times in this thread, there are a LOT of options.  So, 
documenting your desires helps narrow down the compatible options and 
makes some choices for you.


Don't worry if you find that your previous choice limits you.  That will 
happen.  You then need to decide if you want to live with the choice 
-or- go back a few steps and change your choice.


Note:  Changing your choice is perfectly fine.  Call it what it is, a 
change, and deal with it.


The documentation you're creating is sort of a proto / alpha checklist 
of goals that you want to achieve.



Can either or both of you concisely list what I'd need
(the ebuild list) to implement a basic, but complete, secure email 
system, as delineated in your recent posts? I'd be willing to document 
both the build and running tests, for the greater good of the gentoo 
community.


I will have to collect a list and get back to you.

Note:  My list will be biased towards my choices.  Given that I do 
things differently than many email admins, my list is likely to be 
considerably different than others.



If there is interest in the tests and results.


I think that quality documentation is always a laudable goal.

Remember, I started this some months ago, cause Frontier does not even 
offer basic email services.  I hate all things cloud (deep desire to be 
100% independent of the cloud) and want the ability to remotely 
retrieve mails and send emails through *my email systems*.  I am 
certainly not alone, as some have sent me private email,

with similar desires.


Fair enough.

The big corporations are trying to destroy and remove standards based 
email from the internet.


I haven't seen much where the big players are trying to actively destroy 
standards based protocols.


I have seen where the big players are requiring higher and higher 
standards than they did 5 / 10 / 15 years ago.


Note:  This is neither breaking nor removing standards.  If anything, 
it's adding new public standards and making people adhere to them.


Analogy:  Some states in the U.S.A. aren't removing old vehicles from 
the road.  They are however introducing requirements for vehicles to 
adhere to more strict emission standards -or- register as historic 
vehicles which imposes some restrictions.


For me, it is my most useful, important and most desired feature of 
the internet.


I find email (SMTP(S) & IMAPS) and Usenet news (NNTP(S)) to be two of 
the most critical Internet services to me.


The web (HTTP(S)) is extremely convenient.  But I could live without the 
web, admittedly reluctantly.



I'm ordering up (6) static IPs from Frontier.


Will this be an 8-block (/29) of globally routed IPs?  Or is it going to 
be 6 random IPs in a larger co-mingled IP network?


Start inquiring of Frontier about how to configure Reverse DNS.  Chances 
are good that Frontier will be familiar with RFC 2317 — Classless 
IN-ADDR.ARPA delegation.  —  If you're not familiar with it, I suggest 
you read RFC 2317.


I'd also suggest starting inquiries of Frontier about whether they 
support the Shared Whois Project (SWIP) and / or RWhois.  —  My VPS 
provider doesn't offer SWIP or RWhois, and I wish that they did.  — 
SWIP and / or RWhois are quite nice to have when it comes to making your 
IP(s) / block(s) stand out from other IP(s) / block(s) near yours. 
(Think in the same /24.)


Note:  Many things on the Internet prefer for name servers to be in 
different /24 networks.  So, having multiple name servers on different 
IPs in the same /24 doesn't count for many people.


At some point, I'll put another primary bandwidth provider under 
this,


I would encourage you to start with a bandwidth provider that you plan 
to stick with for a number of years.  (I know, things change.  Do the 
best you can with the information you have at hand now, and deal with 
change if / when it comes.)


I say this because it takes a fair bit of effort to 

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-18 Thread Grant Taylor

On 8/18/20 4:30 AM, Ashley Dixon wrote:
but nothing can replace it in terms of interoperability and 
convenience.


That is an EXTREMELY important point.

SMTP is a protocol that completely independent implementations can use 
to exchange messages with each other.


You can set up gateways to enable different forms of messages to go to 
and come from SMTP.  Things like this allow you to send a fax to an 
email gateway to an MMS gateway to receive a picture on your phone.


There are /SO/ /MANY/  to / from email gateways (which effectively 
means SMTP) that are used each and every day to run the world.


Perhaps the younger generation ... would  almost-unanimously disagree, 
but your proposed solution doesn't really provide greater ease for you, 
or the people e-mailing you!


mail(fire)ball provides something, but I'm quite sure "ease (of use)" is 
decidedly /NOT/ one of the things that it provides.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-18 Thread Grant Taylor

On 8/18/20 5:59 AM, Caveman Al Toraboran wrote:
redundant as in containing concepts already done in other protocols, 
so smtp has many re-invented wheels that are already invented in 
existing protocols.


Please elaborate.  Please be careful to provide information about /when/ 
the protocols that SMTP is supposedly redundant of were developed.


I suspect that you will quickly find that SMTP predates the protocols 
that you are stating it's redundant of.  I further suspect that you will 
find that SMTP predates them by 10, or more likely 20, if not 30 years.


Here's a hint.  SMTP was ~'82.  The first HTTP (0.9) was ~'91, and we 
couldn't POST things in it; POST arrived with HTTP/1.0 (~'96).  HTTP/2 
was ~'15.



basically smtp, as an application-layer protocol, is needless.


We are all entitled to our own opinion.

imo, smtp should be a much-higher level protocol defined purely on 
top of dns and http/2.


How do you get any higher layer than the application layer?

e.g. for mail submission, there is no need for a separate 
application-layer protocol as we can simply use http/2.  because the 
concept of mail submission is a special case of data submission, 
which is already in http/2.


HTTP /now/ has a way to submit data.  HTTP didn't exist when SMTP was 
developed.  Further, HTTP didn't have the ability to submit data for a 
while.


If you look at the multiple layers of the network stack, HTTP and SMTP 
are both at the application layer.  Now you are suggesting moving equal 
peers so that mail is subservient to / dependent on the web?


Does HTTP or the web servers have the ability to queue messages to send 
between systems?  How many web servers handle routing of incoming 
messages to send to other servers?  How dynamic is this web server 
configuration to allow servers for two people who have never exchanged 
email to do so?


This routing, queuing, and many more features are baked into the email 
ecosystem.  Features that I find decidedly lacking in the web ecosystem.



here is a more complete example of what i mean:

1. we lookup MX records to identify smtp servers to submit mails to.
2. from the response to that lookup we get a domain name, say, 
mail.dom.com.


#1 and 2 are par for what we have today.  No improvement.

3. then, the standard defines a http/2 request format to submit 
the mail.


Given how things never die on the Internet, you're going to need both 
SMTP /and/ HTTP /on/ /the/ /email/ /server/ to be able to send & receive 
email with people on the Internet.


So you now have a HUGE net negative in that you have the existing 
service plus a new service.  You're in many ways doubling the exposure 
and security risk of email servers.



an example of step (3) could be this:

 https://mail.dom.com/from=...=...=...\
 =...=...=...=...\
 =...


If you were to do this, you would NOT do it via GETs with URL 
parameters.  You would do it as POSTs.


You will also have to find a way to deal with all the aspects of SMTP 
and RFC 822 email + mime.  I suspect that you will find this to be 
extremely tricky.  Especially if you try to avoid SMTP / RFC 822 
semantics b/c SMTP is apparently a bad thing.


How does your example scheme account for the fact that the SMTP envelope 
from address frequently differs from the RFC 822 From: header?  Remember, 
this very feature is used thousands of times per day.  So you have to 
have it.
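The distinction can be sketched with Python's standard email library.  This is 
illustrative only and not from the thread; all addresses are hypothetical:

```python
# Sketch of the envelope-from vs. header-From distinction (hypothetical
# addresses).  The From: header lives inside the RFC 822 message; the
# envelope sender travels separately, in SMTP's MAIL FROM command.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "Caveman <caveman@example.org>"  # what the recipient's MUA shows
msg["To"] = "grant@example.net"
msg["Subject"] = "hello"
msg.set_content("body text")

# Mailing lists and forwarders routinely set the envelope sender to a
# bounce-handling address that differs from the author's From: header.
envelope_from = "gentoo-user-bounces@lists.example.com"

print(envelope_from == str(msg["From"]))  # → False: the two routinely differ
```

Bounces go back to the envelope sender, not the From: header, which is why a 
mail list doesn't spray delivery failures at message authors.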


There are also many Many MANY other features of SMTP, used thousands 
of times a day, that I'm seeing no evidence of here.



 i don't know how http/2 works.  do they have
 POST requests?  if so maybe fields attach1,
 attach2, ..., attachn can be submitted as file
 uploads using POST.

further, if we modify steps (1) and (2), we can generalise this 
concept into tor services.  e.g.  an email address simply becomes an 
onion address.  e.g. if vagzgdrh747aei0q.onion is the hidden service 
address of your mail server, then your email address could be written 
as (for convenience):


 remco@vagzgdrh747aei0q.onion


I ... I don't have the words.  Go run that idea past an SEO expert.

Go ask people to drop their domain name in favor of a hash.

I'm not going to hold my breath.

How are you going to handle the billions of email clients that exist 
today, many of which will never be updated, to handle Tor?  You're going 
to have to have something to gateway between old and new.


That means that you're still going to have steps #1 and #2.  You can't 
get away from them without everybody and everything migrating to the new 
system.  Even then, chances are still extremely good that you're /still/ 
going to have #2.


and when a "mail" client tries to submit you an email, it submits it 
by this url:


 https://vagzgdrh747aei0q.onion/to=remco&...etc.


I haven't the words.

then, in order to authenticate a source, we simply use public-private 
keys ...


Because that has worked out so well and with so few problems in the past 
25 years.



... to sign messages.


Even /more/ unlikely to be 

Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-18 Thread Grant Taylor

On 8/18/20 1:00 AM, Caveman Al Toraboran wrote:
not specifically with a mail provider, but with other i.t. services, 
yes.  and since they're all humans, then the simplest model that 
explains this is that this is about humans in general, and same past 
experience would extend to mail provider's admins.


To each their own.


yes.  smtp is nasty, and also redundant.


I disagree.

Simple Mail Transfer /Protocol/, as in the application layer language 
spoken between mail servers is fairly elegant.


What is done on top of that and with the data that goes back and forth 
is far more iffy.


I also disagree that SMTP is redundant.  I'm not aware of anything else 
nearly as ubiquitous as SMTP for getting messages between systems in a 
fault tolerant manner.


makes me wonder if i should just create me a hidden tor service that 
is just a normal website, and give its url to people (instead of email) 
who want to message me by telling them ``submit your messages to me''.


So you want to change from a ubiquitous protocol that is supported by 
many Many MANY devices to niche protocol that has a non-trivial 
installation / configuration curve.


then, verify messages by mailing their supplied email a confirmation 
message.


And then you want to take what people send you, turn around and send 
unsolicited messages based on it — this is the icing on the cake — using 
the protocol that you are trying to avoid.


It's only a matter of time before someone uses your Tor hidden service 
as a vector to send spam.  —  Joe Job comes to mind.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-18 Thread Grant Taylor

On 8/18/20 12:43 AM, Caveman Al Toraboran wrote:
would i get blacklisted for simply not using spf/dkim/etc?  even if 
no other user is using the mail service other than me and i'm not 
mass mailing?


I don't think it's that you would be black listed per se.

Rather, I think it's that nothing would cause your email to stand out / 
up and appear more legitimate than the general background noise.


You want to stand out from the background noise so that you aren't 
subject to the almost default block of a lot of background noise.


It also provides something for positive (and negative) reputation to be 
associated with.


Think "some random person said" vs "Caveman said".  Which will mean more 
in the circles you travel in?




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-17 Thread Grant Taylor

On 8/17/20 6:10 AM, Wols Lists wrote:
Yup. If you've got mail DNS records pointing at your home server, 
incoming mail shouldn't be a problem and your vps admin can't snoop 
:-)


True.

But the ISP can still sniff the traffic and you can be subject to DPI.

Can't you tell your server to forward all outgoing mail to your 
ISP's SMTP server? That way, you don't have to worry about all the 
spam issues, and it *should* just pass through.


That can start to run afoul of some SPF configurations.  Or you must 
allow your ISP's SMTP server to send email as you.  Which means that 
other ISP users can also send email as you.  You are also beholden to 
the ISP's SMTP infrastructure not changing, lest a change on their end 
break your SPF configuration.  I would probably recommend an ESP's 
SMTP service over your ISP's SMTP service, as the ESP will have more 
experience with this; it's part of their business model.
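As a sketch (hypothetical domain and include target), the SPF record for such 
a setup ends up authorizing the ISP's entire outbound pool:

```
example.org.  3600  IN  TXT  "v=spf1 mx include:spf.isp.example ~all"
```

Every customer relayed through spf.isp.example passes that check for your 
domain, which is exactly the "other ISP users can also send email as you" 
exposure.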


"Should" is the operative word.

There is also the fact that your outbound email will now potentially, if 
not likely, sit in the ISP's SMTP server queue, thus re-introducing an 
opportunity for it to be scrutinized.


The main worry for snooping is inbound mail waiting for collection - 
outbound requires a dedicated eavesdropping solution and if they're 
going to do that they can always snoop ANY outgoing SMTP.


It depends what you mean by "dedicated eavesdropping solution".  General 
network sniffing and / or DPI does not fall under many definitions of 
dedicated.


Carte blanche redirecting / intercepting SMTP traffic through one of 
their hosts is also possible.


Your local / residential ISP can't do anything if you tunnel your 
outbound SMTP through an encrypted connection to a VPS.  But that 
re-introduces other complications of VPSs.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-17 Thread Grant Taylor

On 8/17/20 5:33 AM, Ashley Dixon wrote:
How many concurrent users will be connected to the mail server? How 
much traffic will the S.M.T.P.  server receive (read: how many 
e-mails  arrive  on  a  daily basis)?


My main VPS has a single digit number of clients and processes anywhere 
between 50,000 and 200,000 emails per day.  It does so without any problem.


If you really don't trust your V.P.S. provider, and your mail server 
is small-ish, you could just skip all the trust issues and buy a 
cheap Raspberry Pi for £20 or so.


The VPS includes a globally routed IP, something that a Raspberry Pi 
doesn't inherently include.  The connectivity, including reverse DNS, is 
a big issue for running an email server.


Running a mail server over a domestic connection presents some 
issues,  such  as dynamic I.P. ranges appearing in the Spamhaus 
blocklist, or some tyrannicalesque I.S.P.s blocking outbound port 25 
(S.M.T.P. submission port),


Nitpick:  SMTP's /submission/ port is TCP 587.  "Submission" is a very 
specific term in SMTP nomenclature.  Specifically clients /submitting/ 
email into the SMTP ecosystem.  Server to server happens over the SMTP port.


I believe you mean the regular SMTP port, TCP 25.
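In Postfix, for example, the two ports are separate services in master.cf.  
This is a sketch, not a complete configuration; option details vary by 
installation:

```
# master.cf: port 25 (smtp) for server-to-server relay,
# port 587 (submission) for authenticated clients
smtp       inet  n  -  n  -  -  smtpd
submission inet  n  -  n  -  -  smtpd
  -o syslog_name=postfix/submission
  -o smtpd_tls_security_level=encrypt
  -o smtpd_sasl_auth_enable=yes
```

Blocking outbound port 25 at the ISP stops server-to-server relay, but clients 
can still submit mail to their own provider on 587.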

but it is possible to have a smooth, self-administered mail server, 
providing you can  put  in  the time and effort.


Agreed.

ProTip:  Running an email server is about more than just SMTP.  You 
really should have a good working understanding of the basics of 
multiple protocols and technologies that are part of the email ecosystem:


 - SMTP protocol
 - DNS protocol
 - POP3 and / or IMAP client access protocols
 - MTA
 - LDA
 - Virus filtering
 - Spam filtering
 - SPF
 - DKIM
 - DMARC
 - RBLs
 - RWLs
 - Client operations
 - email ecosystem nomenclature

That's just the short list.

When I say "have a good working understanding", I mean that you should 
be able to provide a 101 level 30-90 second description of each of those 
items.  Actual understanding, not just rote memorization.



I have been doing it myself for a few years with Courier and Postfix


I've been doing it for 20+ years with multiple MTAs, multiple client 
MUAs, and multiple 3rd-party as-a-service providers.  None of the 
components is difficult by itself.  The annoying part comes when you 
try to get multiple components to interact well with each other.



(although I wouldn't recommend Courier; Dovecot is far superior).


To each their own.  I chose Courier because it could do things that 
Dovecot couldn't (at the time I made the decision) and fit my needs 
considerably better.


Some of the things that you need to make decisions about are learned 
about with experience, usually unfavorable experience.  As in "crap, I 
don't like the way that works".  Thus you make a new decision.


There is (or used to be) much debate about should email accounts be real 
and have backing Unix (OS) level accounts, or should they be virtual and 
fall under the auspice of one single Unix (OS) level account that the 
client access protocol daemon(s) run as.  From a purely email 
perspective, this might not matter.  But it really starts to matter if 
you want friends that have email with you to also be able to host a web 
site with you and need to connect in to manage their site, thus needing 
a Unix (OS) level account to do so.



What do you think?


There are MANY different ways that you can combine the things I listed 
above.  It is usually a personal choice.  Some things that work out well 
in one configuration are completely non-applicable or even detrimental 
in another configuration.


There are many recipes to get started.

You really need to start somewhere, learn as you go, and make your own 
choices.




--
Grant. . . .
unix || die



Re: [gentoo-user] tips on running a mail server in a cheap vps provider run but not-so-trusty admins?

2020-08-17 Thread Grant Taylor

On 8/16/20 10:50 PM, Caveman Al Toraboran wrote:

hi.


Hi


context:

1. tinfoil hat is on.


Okay.

2. i feel disrespected when someone does things to my stuff without 
getting my approval.


Sure.

3. vps admin is not trusty and their sys admin may read my emails, 
and laugh at me!


Do you have any (anecdotal) evidence that this has actually happened?

Hanlon's razor comes to mind:

   Never attribute to malice that which is adequately explained by 
stupidity.


My experience supports Hanlon's razor.

This doesn't mean that there aren't malicious admins out there.  Many in 
our industry have fun with the B.O.F.H. and P.F.Y.  But I think that's 
more what we want to do -- if there were no repercussions -- and not 
what we actually do.  *MANY* people talk a big game.  I've seen few 
follow through on the boasting.


4. whole thing is not worth much money.  so not willing to pay more 
than the price of a cheap vps.


That is your choice.  I personally find that my email / DNS / website is 
worth ~$240 a year.  I could probably do it for ~$120 a year if I wanted 
to drop redundancy.


I could theoretically do it for $60 a year if I wanted to lower 
functionality.



moving to dedicated hardware for me is not worth it.


Fair enough and to each their own.

I used to have dedicated hardware in my house, and then migrated to VPS 
based solutions as part of a cross country move without a static IP on 
the destination end.


my goal is to make it annoying enough that cheap-vps's admins find 
it a bad idea for them to allocate their time to mingle with my stuff.


I'd like to hear any (anecdotal) evidence of this happening that you have.

If there is anything, I'd suspect that it's bulk Deep Packet Inspection 
monitoring things.  I doubt that actual malicious involvement is common.



thoughts on how to maximally satisfy these requirements?


Well, seeing as how you're talking about email, the biggest elephant in 
the room is SMTP's default of an unencrypted communications path.  It's 
relatively easy to add support for encryption, but more systems than I'm 
comfortable with don't avail themselves of the optional encryption for 
some reason.  Sure, it's possible to configure many receiving SMTP 
servers to require it from specific sending systems and / or sending 
domains.  But this is effort you have to expend to enact these restrictions.


Actual encrypted email (S/MIME, PGP, etc.) helps in this regard.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: How to hide a network interface from an application

2020-08-16 Thread Grant Taylor

On 8/16/20 5:07 AM, Neil Bothwick wrote:
Going OT here, but why do you dislike Docker? I've only recently 
started using it, so if there are any major, or otherwise, drawbacks, 
I'd like to know before I get too entwined in their ecosystem.


Why do I need one or more (more with older versions) additional daemons 
to run simple services or virtual routers (network namespaces)?


I don't like many of the implications which, as I understand it, Docker 
imposes.


Conversely I can do what I want with a few relatively simple (to me) 
commands directly in init scripts.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: How to hide a network interface from an application

2020-08-15 Thread Grant Taylor

On 8/13/20 6:03 PM, Alexey Mishustin wrote:

Isn't this classic option suitable?

iptables -A OUTPUT -i  -m owner --gid-owner noinet -j DROP


Ugh.

I'm sure that's a viable method to deal with the problem after the fact.

But I prefer to not have the problem in the first place.  Thus no need 
to deal with it after the fact.


I dislike Docker, but I do like the idea of containers or network 
namespaces.  As such, I think it should be relatively trivial to create 
a network namespace that has what you need without too much effort.  I'd 
think that some judicious "unshare" / "nsenter" / "ip netns exec" 
commands would suffice.


I run BIRD in multiple network namespaces (think virtual routers) for 
things with a few commands and NO Docker, et al.


   unshare --mount=/run/mountns/${NetNS} --net=/run/netns/${NetNS} 
--uts=/run/utsns/${NetNS} /bin/hostname ${NetNS}
   nsenter --mount=/run/mountns/${NetNS} --net=/run/netns/${NetNS} 
--uts=/run/utsns/${NetNS} /bin/ip link set dev lo up
   nsenter --mount=/run/mountns/${NetNS} --net=/run/netns/${NetNS} 
--uts=/run/utsns/${NetNS} /usr/sbin/bird -P /var/run/bird.${NetNS}.pid 
-s /var/run/bird.${NetNS}.ctl


You can replace /usr/sbin/bird ... with whatever command you need to 
start Plex.


Obviously you will need to add the network interface to connect from 
your physical network to the network namespace and configure it 
accordingly.  But that's relatively trivial to do.
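As an illustrative sketch of that last step (root required; the interface 
names and addresses here are made up), a veth pair can connect the namespace 
to the host:

```
# create a veth pair and push one end into the namespace
ip link add veth-${NetNS} type veth peer name veth0
ip link set veth0 netns ${NetNS}

# address and bring up the inside end
nsenter --net=/run/netns/${NetNS} ip addr add 192.0.2.2/24 dev veth0
nsenter --net=/run/netns/${NetNS} ip link set dev veth0 up

# address and bring up the host end
ip addr add 192.0.2.1/24 dev veth-${NetNS}
ip link set dev veth-${NetNS} up
```

From there you can bridge or route veth-${NetNS} onto the physical network as 
your topology requires.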


I find these types of network / mount / UTS namespaces, containers, to 
be extremely lightweight and easy to do things in.  I've created some 
wrapper scripts to make it trivial to add / list / remove such 
containers; mknns, lsnns, rmnns.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: How to hide a network interface from an application

2020-08-15 Thread Grant Taylor

On 8/13/20 4:03 PM, Grant Edwards wrote:
I'm not sure what "go out of your way" means in this context.  I assume 
I'd create a network namespace for Plex, and then use either macvlan 
or ipvlan to share one of the physical interfaces between the root 
namespace and the Plex namespace.


I've found that MACVLAN / MACVTAP, and I assume IPVLAN / IPVTAP, have a 
bit of a flaw.  Specifically, I've not been able to put an IP address on 
the parent interface, e.g. eth1, and get communications between the host 
and the {MAC,IP}V{LAN,TAP} clients.  To get such host to 
{MAC,IP}V{LAN,TAP} communications, I've had to add an additional 
{MAC,IP}V{LAN,TAP} and put the host's IP on that.


Conversely, I've been able to use traditional bridging or OVS to 
accomplish this.


I'd like the 'lo' interfaces to be shared as well, but I'm not sure 
that's possible.


I think that's contrary to how network namespaces work.

I've got a colleague at work who has written a proxy program that will 
listen on a port in one network namespace and connect to the same (or 
optionally different) port in another network namespace.  It sort of 
behaves much like OpenSSH's local port forwarding going from one network 
namespace to another network namespace with the service running. 
Somewhat akin to SSH agent forwarding.




--
Grant. . . .
unix || die



Re: [gentoo-user] which filesystem is best for raid 0?

2020-08-12 Thread Grant Taylor

On 8/12/20 5:56 PM, Adam Carter wrote:
Depends on your use case, ... so what you use will depend on 
speed/reliability trade off.


There are some specific uses cases where speed is desired at least an 
order of magnitude more than reliability.



ext2 is less reliable due to it missing the journal


Some cases, that's an advantage.

Consider a use case where having the files is a benefit, but not having 
the files only means that they are fetched over the network.  Like a 
caching proxy's on disk cache.


If the caching proxy loses its disk cache, so what?  It re-downloads 
the files and moves on with life.


RAW /speed/ is more important in these types of limited use cases.

As such, the journal is actually a disadvantage /because/ it slows 
things down somewhat.




--
Grant. . . .
unix || die



Re: [gentoo-user] can't mount raid0

2020-08-12 Thread Grant Taylor

On 8/12/20 1:28 PM, Никита Степанов wrote:

livecd gentoo # mount /dev/md1 /mnt/gentoo
mount: unknown filesystem type 'linux_raid_member'
what to do?


What does /proc/mdstat show?

Is it a partitioned software RAID?  If so, you need the partition 
devices and to mount the desired partition.
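For reference, a healthy partitioned RAID 0 might look something like this.  
The output and device names are illustrative, not from the original poster's 
system:

```
# cat /proc/mdstat
Personalities : [raid0]
md1 : active raid0 sdb1[1] sda1[0]
      1953260544 blocks super 1.2 512k chunks

# mount the partition on the array, not the array device itself
mount /dev/md1p1 /mnt/gentoo
```

The "unknown filesystem type 'linux_raid_member'" error usually means the 
mount target is a RAID component or container, not the filesystem inside it.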




--
Grant. . . .
unix || die



Re: [gentoo-user] which filesystem is best for raid 0?

2020-08-12 Thread Grant Taylor

On 8/12/20 11:53 AM, Никита Степанов wrote:

which filesystem is best for raid 0?


I'm guessing that you're after speed more than anything else since 
you're talking about RAID 0.


As such, I'd suggest avoiding a journaling file system as that's 
probably unnecessary overhead.


I'd consider ext2 for something like a news spool where performance is 
more important and the data is somewhat ephemeral.  Likewise for a 
caching proxy spool.




--
Grant. . . .
unix || die



Re: [gentoo-user] What is faster: amd64 or x86?

2020-08-11 Thread Grant Taylor

On 8/11/20 10:37 AM, Gregor A. „schlumpi“ Segner wrote:

it’s total nonsense today to install a 32bit kernel on a 64Bit machine.


I can see some value in having a 32-bit /only/ system if you /must/ 
support 32-bit software with no need for 64-bit and would like to avoid 
the complexity of multi-lib.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-07 Thread Grant Taylor

On 8/7/20 2:06 PM, james wrote:

Here is an short read on the acceptance and usage of IPv6:

https://ungleich.ch/u/blog/2020-the-year-of-ipv6/

So, yes I am working on using IPv6, with my RV/mobile-lab.


I think that IPv6 is a good thing.

But I would be remiss not to say that IPv6 is somewhat of a black sheep 
in the email administrator community.


You still effectively must have IPv4 connectivity to your email server, 
lest a non-trivial percentage of email fail to flow.


I also know of a number of email administrators that are specifically 
dragging their feet regarding IPv6 because there hasn't yet been 
critical mass use of IPv6 /for/ /email/.


In fact, some of the early IPv6 adopters for email are spammers.  So 
some administrators stem this tide by being exclusively IPv4.


I think dual stack for email servers is great.  (Deal with the spam.) 
But being exclusively IPv6 on an email server is going to be problematic.


I'm focusing on email servers because that's what this thread had 
largely been about.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread Grant Taylor

On 8/1/20 1:53 PM, antlists wrote:
That's one of the good things about the UK scene. In theory, and mostly 
in practice, the infrastructure (ie copper, fibre) is provided by a 
company which is not allowed to provide the service over it, so a 
mom-n-pop ISP can supposedly rent the link just as easily as a big ISP.


For a long time, the incumbent telephone carrier was required to allow 
other companies to access the DSL network and provide service.


I've not kept up with the laws and have no idea of the current state.

When we move I'll almost certainly move to Andrews and Arnold, who are 
exactly that mom-n-pop setup that are run by a bunch of engineers, as 
opposed to accountants.


:-)



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread Grant Taylor

On 8/1/20 5:36 PM, Grant Edwards wrote:

Statically entered in the DHCP server doesn't count as static?


Not to the client computer that's running the DHCP client.

The computer is still configured to use a dynamic method to acquire its 
IP address.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread Grant Taylor

On 7/31/20 2:01 PM, Grant Edwards wrote:
There may be half way decent ISPs in the US, but I haven't seen one 
in over 20 years since the last one I was aware of stopped dealing 
with residential customers.  They were a victim of the "race to the 
bottom" when not enough residential customers were willing to pay $10 
per month over what Comcast or US-West was charging for half-assed, 
crippled internet access).


I think there is probably a good correlation between size and desire to 
be good and provide service.


I've found that smaller ISPs (who actually try as opposed to cheating 
people) tend to be better.  Sadly, many of these Mom & Pop type ISPs 
were consumed during the aptly described race to the bottom.


:-(

I still do consulting work with a small Mom & Pop ISP in my home town and I 
have a small municipal ISP where I am now.  Both are quite good in many 
regards.  Unfortunately, neither of them offer IPv6.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread Grant Taylor

On 7/31/20 2:05 PM, Grant Edwards wrote:
Nit: DHCPv6 can be (and usually is) dynamic, but it doesn't have to 
be. It's entirely possible to have a static IP address that your OS 
(or firewall/router) acquires via DHCPv6 (or v4).  [I set up stuff 
like that all the time.]


Counter Nit:  That's still acquiring an address via /Dynamic/ Host 
Configuration Protocol (v6).  It /is/ a /dynamic/ process.


Static IP address has some very specific meaning when it comes to 
configuring TCP/IP stacks.  Specifically that you enter the address to 
be used, and it doesn't change until someone changes it in the 
configuration.


Either an IP address is statically entered -or- it's dynamic.

The fact that it's returning the same, possibly predictable, address is 
independent of the fact that it's a /dynamic/ process.
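In Gentoo's netifrc terms, the distinction is which of these lines is in 
/etc/conf.d/net.  This is a sketch with made-up addresses:

```
# static: the address is entered in the host's own configuration
config_eth0="192.0.2.10/24"
routes_eth0="default via 192.0.2.1"

# dynamic: acquired at runtime, even if the DHCP server always
# hands back the same lease
#config_eth0="dhcp"
```

A DHCP reservation makes the second form predictable, but the host is still 
asking the network for its address rather than asserting it.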




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-08-01 Thread Grant Taylor

On 7/31/20 1:54 PM, Grant Edwards wrote:
If I had a week with nothing to do, I'd love to try to get something 
like that working


You don't need a week.  You don't even need a day.  You can probably 
have a test tunnel working (on your computer) in less than an hour. 
Then maybe a few more hours to get it to work on your existing equipment 
(router) robustly and automatically on reboot.


I encourage you to spend that initial hour.  I think you will find that 
it will be time well spent.


Hurricane Electric does have something else that will take more time, 
maybe a few minutes a day over a month or so.  Their IPv6 training 
program (I last looked a number of years ago) is a good introduction to 
IPv6 in general.  Once you complete it, they'll even send you a shirt as 
a nice perk.


Note:  H.E. IPv6 training is independent and not required for their 
IPv6-in-IPv4 tunnel service.



but, I assume you need a static IPv4 address.


Nope.  Not really.

You do need a predictable IPv4 address.  I'm using a H.E. tunnel on a 
sticky IP (DHCP with long lease and renewals) perfectly fine.


If your IP does change, you just need to update the tunnel or create a 
new one to replace the old one.  This is all managed through their web 
interface.




--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-08-01 Thread Grant Taylor

On 7/31/20 12:01 PM, james wrote:

yep, at least (2) static IPs.


You can actually get away with one static IP.  It's ill advised.  But it 
will function.


You can also have external 3rd party secondary DNS servers that pull 
from your (private) primary DNS server.  You might even be able to get 
this communications over a VPN if the secondary DNS server operator is 
cooperative.


Once running I'll find a similar bandwidth usage organization and swap 
DNS secondary services.


That's a nice idea.  But I've not bothered with that in about 18 years.

I have Linode DNS servers be secondaries for my domains and point the 
world at them.  I'm still in complete control of the domains via my 
personal primary DNS server.


Note:  I'm not offering reciprocal secondary DNS service.

This is a trivial (for Linode) perk that I get by being a customer for 
other things.  I think a single < $5 / month VPS qualifies me.  (I don't 
remember if there is a lower tier VPS or not.)


Nowadays, with all the issues with CAs and other similar/related 
issues, that might get complicated.


Don't let those features blind you, especially if you don't want to use 
their features.  Also be mindful of ascribing credit to them if they are 
simply front-ending something like Let's Encrypt, which you can do on 
your own for free.



(2) static IPs for (2) dns primary resolvers should get me going.


1 static IP somewhere will get you started.  ;-)


Verizon killing its email services:

https://www.inquirer.com/philly/blogs/comcast-nation/Verizon-exiting-email-business.html 


I'm not at all surprised.

Well, it's probably not appropriate for me to "finger" specifics.  But if 
you just learn about all the things some carriers are experimenting 
with, in the name of 5G, it is a wide variety of experimentation, to put 
it mildly.


5G is just the latest in a long line of motivators that have caused 
providers to do questionable things.


Forking the internet into 1.China & pals  2. European Member states. 3. 
USA and allies.


I've not yet seen any indication that these geopolitical issues have 
influenced the technological standards that are used.  Sure, they are 
influencing who they are used with, and in some cases /not/ used with. 
But, thus far, the underlying technical standards have been the same.


But someone like you (Grant) could help guide and document a gentoo 
centric collective that provides for email services, secure/limited 
web servers and a pair of embedded/DNS (primary) resolvers so we can 
keep email systems alive.


A couple of things:

1)  Nothing about what I'm suggesting is Gentoo, or even Linux, 
specific.  The same methodologies can be used on other OSs.


2)  I don't think that email is going to die.  It certainly won't do it 
faster than Usenet has (not) done.  (Usenet is still alive and quite 
active.)


Yes, email is growing and changing.  But each and every one of us that 
thinks about running our own email server has a tiny bit of influence in 
that through our actions.



Thanks  for your insight and suggestions.


You're welcome.  :-)



--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-08-01 Thread Grant Taylor

On 7/31/20 1:39 PM, james wrote:

I'd like to start with a basic list/brief description of these, please?


They basically come down to two broad categories:
1)  Have the "static" IP bound to an additional network interface on the 
destination system and leverage routing to get from clients to it.
2)  Have the "static" IP bound to a remote system that forwards traffic 
to a different address on the local system.


Traffic frequently spans the network between the local system and the 
remote system through some sort of VPN.


Note:  VPNs can be encrypted or unencrypted.

I think one of the simpler things to do is to have something like a 
Raspberry Pi (a common, simple, inexpensive example) SSH to a Virtual 
Private Server somewhere on the Internet and use remote port forwarding.


   root@pi# ssh root@vps -R 203.0.113.23:25:127.0.0.1:25

Note:  I'm using root to simplify the example.  Apply security best 
practices.


This will allow port 25 on a VPS with a (true) static IP (configured in 
/etc/conf.d/net) to receive TCP connections and forward them to your 
local mail server completely independent of what IP your local Pi may 
connect to the Internet with.


Your MX record(s) resolve to the IP address of the VPS.  You can change 
local IPs or ISPs or even country as often as you like.


Another more complex method is to use a more traditional VPN; e.g. GRE 
tunnel, IPsec tunnel, SSH L2 / L3 tunnel, OpenVPN, WireGuard and IP 
forwarding on the VPS to route the TCP connections to the local mail server.


Things quickly get deep in minutia of what method you want to use and 
what you want to go over it.


I think the SSH remote port forwarding is an elegant technique.  It's 
relatively simple and it has the added advantage that when the 
connection is down the VPS will not establish a TCP connection (because 
ssh is not listening on the remotely forwarded port) thus remote 
connecting systems will fail hard / fast, thus it's more likely to be 
brought to a human's attention.




--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-07-31 Thread Grant Taylor

On 7/30/20 3:05 AM, antlists wrote:

From what little I understand, IPv6 *enforces* CIDR.


Are you talking about the lack of defined classes of network; A, B, C, 
D, E?  Or are you talking about hierarchical routing?


There is no concept of a class of network in IPv6.

Hierarchical routing is a laudable goal, but it failed 15-20 years ago.

Each customer is then given one of these 64-bit address spaces for their 
local network. So routing tables suddenly become extremely simple - 
exactly the way IPv4 was intended to be.


Except that things didn't work out that way.

Provider Independent addresses, multi-homing, and redundant routes mean 
that hierarchical routing failed 15-20 years ago.


Many providers try to address things so that hierarchical routing is a 
thing within their network.  But the reality of inter-networking between 
providers means that things aren't as neat and tidy as this on the Internet.


This may then mean that dynDNS is part of (needs to be) the IPv6 spec, 
because every time a client roams between networks, its IPv6 address HAS 
to change.


Nope.

It's entirely possible to have clients roam between IPv6 (and IPv4) 
networks without (one of) its address(es) changing: Mobile IP, VPNs, 
tunnels, BGP.


Sure, the connection to the network changes as it moves from network to 
network.  But this doesn't mean that the actual IP address that's used 
by the system to communicate with the world changes.


Take a look at IPv6 Prefix Delegation.  At least as Comcast does it, 
you only have a link-local IPv6 address on the outside and a /56 on the 
inside of the network.  The world sees the globally routed IPv6 network 
on the inside and doesn't give 2¢ what the outside link-net IPv6 
address is.  Comcast routes the /56 they delegate to you via the 
non-globally-routed link-net IPv6 address.
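On a Gentoo router behind such a setup, prefix delegation is a few 
lines of dhcpcd configuration.  A hypothetical /etc/dhcpcd.conf sketch 
(interface names are assumptions: eth0 = WAN, eth1 = LAN):

```
# Request an IPv6 prefix on the WAN side and carve a /64 out of the
# delegated prefix for the LAN interface.
interface eth0
  ipv6rs          # listen for router advertisements on the uplink
  ia_na 1         # also request a normal (IA_NA) address
  ia_pd 2 eth1/0  # request a delegated prefix; assign subnet 0 to eth1
```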


There are multiple ways to keep the same IP while changing the 
connecting link.




--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-07-31 Thread Grant Taylor

On 7/29/20 5:23 PM, james wrote:

Free static IPs?


Sure.

Sign up with Hurricane Electric for an IPv6 in IPv4 tunnel and request 
that they route a /56 to you.  It's free.  #hazFun


Note: here in the US, it may be easier, and better, to just purchase 
an assignment, which renders them yours.


Simply paying someone for IPs doesn't "render them yours" per se.

I'd be shocked if you do not have to pay somebody residual fees, 
just like DNS.


It is highly dependent on what you consider to be "residual fees".

Does the circuit to connect you / your equipment to the Internet count?

What about the power to run said equipment?

Does infrastructure that you already have, and are completely paying 
for, mean that adding a new service (DNS) to it costs (more) money?


Yes, there is annual (however it works out) rental on the domain name. 
But you can easily host your own DNS if you have infrastructure to do so on.


My VPS provider offers no-additional-charge DNS services.  Does that 
mean that it's free?  I am paying them a monthly fee for other things. 
How you slice things can be quite tricky.



So since there seems to be interest from several folks,
I'm all interested in how to do this, US-centric.


I think the simplest and most expedient is to get a Hurricane Electric 
IPv6-in-IPv4 tunnel.



Another question. If you have (2) blocks of IPv6 addresses,
can you use BGP4 (RFC 1771, 4271, 4632, 5678, 5936, 6198, etc.) and other 
RFC-based standards to manage routing and such multipath needs?


Conceptually?  Sure.

Minutia:  I don't recall at the moment if the same version of the BGP 
protocol handles both IPv4 and IPv6.  I think it does (BGP-4 with the 
multiprotocol extensions).  But I need more caffeine and to check things 
to say for certain.  Either way, I almost always see IPv4 and IPv6 
neighbor sessions established independently.
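That per-family configuration can be sketched in FRR-style syntax.  A 
hypothetical example (AS numbers and peer addresses are invented, drawn 
from documentation ranges): one BGP process, with the IPv4 and IPv6 
neighbors activated independently under their own address families.

```
! One BGP process; each neighbor is activated only in its own
! address family, giving separate IPv4 and IPv6 sessions.
router bgp 64512
 neighbor 192.0.2.1 remote-as 64511
 neighbor 2001:db8::1 remote-as 64511
 !
 address-family ipv4 unicast
  neighbor 192.0.2.1 activate
 exit-address-family
 !
 address-family ipv6 unicast
  neighbor 2001:db8::1 activate
 exit-address-family
```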


There is a fair bit more that needs to be done to support multi-path in 
addition to having a prefix.


Who enforces what carriers do with networking? Here 
in the US, I'm pretty sure it's just up to the 
Carrier/ISP/bypass_Carrier/backhaul-transport company.


Yep.

There is what any individual carrier will do and then there's what the 
consensus of the Internet will do.  You can often get carriers to do 
more things than the Internet in general will do.  Sometimes for a fee. 
Sometimes for free.  It is completely dependent on the carrier.


Conglomerates with IP resources pretty much do what they want, and they 
are killing the standards based networking. If I'm incorrect, please 
educate me, as I have not kept up in this space since selling my ISP 
more than (2) decades ago.


Please elaborate on what you think the industry / conglomerates are 
doing that is killing the standards based networking.


The Trump-China disputes are only accelerating open standards for 
communications systems, including all things TCP/IP.


Please elaborate.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-07-31 Thread Grant Taylor

On 7/30/20 5:38 PM, Ralph Seichter wrote:
I'd be interested to hear from users who still need to pay extra 
for IPv6.


I'd be willing, if not happy, to pay a reasonable monthly fee to be able 
to get native IPv6 from my ISP.


But it's 2020 and my ISP doesn't support IPv6 at all.  :-(

As such, I use a tunnel for IPv6.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-07-31 Thread Grant Taylor

On 7/29/20 1:28 PM, Grant Edwards wrote:
I don't know what most ISPs are doing.  I couldn't get IPv6 via 
Comcast (or whatever they're called this week) working with OpenWRT 
(probably my fault, and I didn't really need it). So I never figured 
out if the IPv6 address I was getting was static or not.


Ya  That was probably a DHCPv6 for outside vs DHCPv6 Prefix 
Delegation (PD) issue.  I remember running into that with Comcast.  I 
think for a while, they were mutually exclusive on Comcast.


There is DHCPv6 (I've implemented it), but I have no idea if anybody 
actually uses it.  Even if they are using DHCPv6, they can be using 
it to hand out static addresses.


I've seen DHCPv6 used many times.  It can be stateless (in combination 
with SLAAC to manage the address) or stateful (where DHCPv6 manages the 
address).  Either way, there is a LOT more information that can be 
specified with DHCPv6 that simple SLAAC doesn't provide.  For a long 
time you couldn't dynamically determine DNS server IP addresses without 
DHCPv6 or static configuration.
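(These days SLAAC can also carry DNS servers, via the RDNSS option of 
RFC 8106.)  A hypothetical radvd.conf sketch showing it, with the 
prefix and server address taken from documentation ranges:

```
# Advertise a prefix for SLAAC and a recursive DNS server (RDNSS),
# so clients learn DNS without needing DHCPv6 at all.
interface eth1
{
    AdvSendAdvert on;
    prefix 2001:db8:1::/64
    {
    };
    RDNSS 2001:db8:1::53
    {
    };
};
```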


The assumption always seemed to be that switching to IPv6 meant the 
end of NAT


That's what the IPv6 Zealots want you to think.


and the end of dynamic addresses.


Nope, not at all.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Local mail server

2020-07-31 Thread Grant Taylor

On 7/29/20 9:41 AM, Peter Humphrey wrote:

Aren't all IPv6 addresses static?


No.

SLAAC and DHCPv6 are as dynamic as can be.

Static is certainly an option.  But I see SLAAC and DHCPv6 used frequently.



--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-07-28 Thread Grant Taylor

On 7/28/20 5:18 PM, james wrote:
If you know a way around this, with full privileges one gets with static 
IP addresses, I'm all ears.?


A hack that I see used is to pick up a small VPS for a nominal monthly 
fee and establish a VPN to it.  Have its IP (and ports) directed 
through the VPN to your local system.  You get just about everything, 
save for what's specifically needed for the VPN.




--
Grant. . . .
unix || die



Re: [gentoo-user] Local mail server

2020-07-19 Thread Grant Taylor

On 7/19/20 8:18 AM, Peter Humphrey wrote:

Afternoon all,


Hi,

I'd like to set up a little box to be a local mail server. It would 
receive mails from other machines on the LAN, and it would fetch 
POP3 mail from my ISP and IMAP mail from google mail. KMail on my 
workstation would then read the mails via IMAP. That's all. I might 
want to add a few extras later, such as receiving SMTP mail for a 
.me domain I own. My present total of emails is about 4000.


That should be quite possible to do.

IMHO there's not much difference between an internal-only and an 
externally accessible mail server as far as the software & configuration 
that's on said server.  The only real difference is what the world 
thinks of it.


I used to have a working system on a box that's now deceased 
[1], but in replicating it I'm having difficulty threading my 
way through the mutually inconsistent Gentoo mail server docs, 
omitting the bits I don't need and interpreting the rest. Bits I 
don't need? Database backend, web-mail access, web admin tools, 
fancy multi-user authorisation, any other baroque complexity.


There are a LOT of ways to do this.  You need to pick the program that 
you want to use for various functions:


 - SMTP: Sendmail (my preference), Postfix (quite popular), etc.
 - IMAP: Courier (my preference), Dovecot (quite popular), etc.
 - POP3: Courier, Dovecot (?), QPopper (?), etc.
 - LDA: Procmail (my preference), delivermail, etc.

Pick the programs that you want to run, possibly influenced by what they 
do and don't support to find an overlap that works.  E.g. Maildir used 
to be less well supported than it is today.


You have already indicated that you want to use fetchmail (or something 
like it).
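For the fetchmail piece, a hypothetical ~/.fetchmailrc sketch matching 
the POP3-from-ISP plus IMAP-from-Gmail setup described (all host names, 
users, and passwords are invented placeholders; Gmail requires an app 
password or OAuth in practice):

```
# Pull POP3 from the ISP and IMAP from Gmail, handing everything to the
# local MTA for delivery to the local user "peter".
poll pop.isp.example protocol pop3
  username "peter" password "secret" is "peter" here

poll imap.gmail.com protocol imap
  username "peter@example.com" password "app-password" is "peter" here ssl
```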


So I'm asking what systems other people use. I can't be unusual in what 
I want, so there must be lots of solutions out there somewhere. Would 
anyone like to offer me some advice?


I actually think it's more unusual to want to run an email server that 
doesn't receive email directly from the world vs one that does.  But 
whatever you want.


As others have alluded to, sending email may be tricky, but ultimately 
possible to do.  It will have a LOT to do with what domain name you use, 
and if you have your server smart host through something else.


1.  Yes, of course I did have backups, but in juggling the media I 
managed to lose them. A world of advice to others: don't grow old.  :)


Oops!



--
Grant. . . .
unix || die



Re: [gentoo-user] ssh defaults to coming in as user "root"?

2020-07-11 Thread Grant Taylor

On 7/10/20 11:12 PM, Walter Dnes wrote:

   Would the following activity trigger creation of .ssh/config ??


If I'm reading your sequence of events properly, no, they should not 
alter your desktop's SSH config to cause it to try to log into the 
notebook as the root user.





--
Grant. . . .
unix || die



Re: [gentoo-user] ssh defaults to coming in as user "root"?

2020-07-10 Thread Grant Taylor

On 7/10/20 6:36 PM, Walter Dnes wrote:

   The question is how did .ssh/config ever get there in the first place?


Seeing as how there is a Host entry with your notebook's name, I can 
only speculate that you, or something you ran, put it there.


I find the KeyAlgorithms line to be atypical as well.

Is there a chance that you used a fancy wrapper, possibly menu driven, 
that might have updated the ~/.ssh/config file?




--
Grant. . . .
unix || die



Re: [gentoo-user] SSH xterm not working properly during install

2020-07-07 Thread Grant Taylor

On 7/7/20 10:40 PM, Walter Dnes wrote:

Thanks, I missed that.  I'll try again and see how it goes.


If you continue to have problems, I would very much like to know the 
particulars.


My experience has been that changing the TERM environment variable has 
had very little success in fixing things like this.


In fact, the only way that I see it working is if your TERM is set to 
something MASSIVELY wrong for your actual terminal, i.e. trying to send 
fancy xterm / ANSI control sequences to an old dumb terminal like a VT100.


Seeing as how ANSI / XTERM* is effectively a significant superset of 
VT100, setting the TERM variable to an older, less capable value does 
little other than limit features.


That being said, I have seen problems when there is a size mismatch 
between what's running on the host and the terminal.  Sort of like if 
you re-sized the window after starting nano (et al.).  Things can get 
weird then.


Beyond any of this, I'd be quite curious what problems you're having.



--
Grant. . . .
unix || die



Re: [gentoo-user] Converting a Portage Flat File to a Directory Structure

2020-04-21 Thread Grant Taylor



On 4/20/20 7:13 AM, Ashley Dixon wrote:

Hi gentoo-user,


Hi,


Following the recent conversation started by Meino, I have decided to
convert my package.* files to directory structures.  For  all  but
one,  this  has  proven tedious, but relatively painless.  My
package.use file is another story: at over three-hundred lines, the
thought of manually  converting  this  to  a  directory structure
does not attract me.

Are there any tools in Portage to help with this, or must I resort to
writing  a shell script ?


I'm not aware of a tool to do the conversion.  However there may be one 
that I'm not aware of.



For example, considering the following lines in my flat package.use:

 media-video/openshot printsupport
 sys-apps/util-linux tty-helpers

I want to take this file and create a directory structure:

 media-video/openshot, containing "media-video/openshot printsupport"
 sys-apps/util-linux, containing "sys-apps/util-linux tty-helpers"


I wasn't aware that you could put sub-directories in 
/etc/portage/package.use.  I've always had to put the files directly in 
that directory, not sub-directories.  As such, my files have names like 
sys-apps-util-linux to avoid naming collisions.  Perhaps things have 
changed since I last tried to use a sub-directory or I am misremembering.



How can this be done ?


I think it should be relatively easy to script reading the line, 
extracting the package name, munging the name, and writing the entire 
unmodified line to a new file based on the munged name.  If directories 
work, create and populate them without munging names.
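A sketch of such a script (POSIX sh; the function name and the /tmp 
demo paths are my own inventions; point it at your real flat file only 
after testing):

```shell
# Hypothetical sketch: split a flat package.use into one file per
# package, munging "cat/pkg" into "cat-pkg" for the file name.
split_pkguse() {
    src=$1 dest=$2
    mkdir -p "$dest"
    while IFS= read -r line; do
        case $line in ''|'#'*) continue ;; esac   # skip blanks / comments
        atom=${line%% *}                          # e.g. media-video/openshot
        name=$(printf '%s' "$atom" | tr '/' '-')  # e.g. media-video-openshot
        printf '%s\n' "$line" >>"$dest/$name"     # keep the line unmodified
    done <"$src"
}

# Demo on throwaway copies, not the live /etc/portage.
rm -rf /tmp/pu.d
printf '%s\n' 'media-video/openshot printsupport' \
              'sys-apps/util-linux tty-helpers' >/tmp/pu.flat
split_pkguse /tmp/pu.flat /tmp/pu.d
cat /tmp/pu.d/sys-apps-util-linux
```

Appending (rather than truncating) keeps multiple flat-file lines for 
the same package together in one file.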




--
Grant. . . .
unix || die



Re: [gentoo-user] best rss reader?

2020-04-19 Thread Grant Taylor

On 4/19/20 3:15 PM, Caveman Al Toraboran wrote:

hi - could everyone share his rss reading setup?


Hi,


 1. what rss feed reader do you use?


  Primary:  rss2email
Secondary:  Thunderbird


 2. what are your theoretical principles that
guided you to choose the rss feed that you
use.


I want to read things on multiple devices, and IMAP based email does 
extremely well at that.


I have select few things that I don't run through rss2email that I just 
read via Thunderbird's RSS support.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread Grant Taylor

On 4/11/20 4:13 PM, antlists wrote:
Which was also a pain in the neck because it was single-threaded - if 
the ISP tried to send an incoming email at the same time the gateway 
tried to send, the gateway hung.


Ew.  I can't say as I'm surprised about that, given the nature of SMTP 
servers in the '90s.


I wonder what the licensing would have been to have one machine sending 
outbound and another receiving inbound.  Though that does assume that 
you could have multiple SMTP gateways connected to MS-Mail.


You could pretty much guarantee most mornings I'd be in the server 
room deleting a bunch of private emails from the outgoing queue, 
and repeatedly rebooting until the queues in both directions managed 
to clear.


Oy vey!

The point is that when the server sends EHLO, it is *not* a *permitted* 
response for the client to drop the connection.


That was the specification for ESMTP - the client should reject EHLO, 
the server tries again with HELO, and things (supposedly) proceed as 
they should. Which they can't, if the client improperly kills the 
connection.


Agreed.

Which shouldn't have been a problem. ESMTP was designed to fall back 
gracefully to SMTP. But if clients don't behave correctly as per the 
SMTP spec, how can the server degrade gracefully?


I wonder how many Sun Sparc boxen were put between Microsoft mail 
infrastructure and the rest of the world in the '90s and '00s.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread Grant Taylor

On 4/11/20 2:17 PM, Michael Orlitzky wrote:

Exchange used to do all manner of stupid things, but now that Microsoft
is running it themselves and making money from O365, they seem to have
figured out how to make it send mail correctly.


I've found that Exchange / IIS SMTP is fairly standards compliant since 
the early-mid 2000s.


Microsoft has always been making money off of Exchange.  (Presuming 
people are being legal about it.)  Be it CALs, upgrade licensing, etc.


Nowadays they prefer to cripple Outlook with non-Exchange protocols, 
so that our users complain about not having shared calendars when 
we've had CalDAV integrated with IMAP for 10+ years.


CalDAV is decidedly not an email protocol; POP3, IMAP, SMTP.

I'm not aware of Outlook ever claiming support for CalDAV.  It has 
supported POP3, IMAP, SMTP, Exchange proprietary protocols, and likely NNTP.


You can get shared calendaring, address books, and folders without 
Exchange.  I used MAPIlab's Colab product for a number of clients for 
many years circa 2010.  It is (was?) an Outlook add-on that added 
support for accessing a shared PST file.  It worked great in multiple 
client's offices of between 5 and 25 people.  (My bigger clients had 
Exchange.)


So, IMHO, complaining that Outlook doesn't support CalDAV is sort of 
like complaining that Firefox doesn't support SIP telephony.  Could it 
if it wanted to, sure.  Should it, maybe.  Does it, no.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-11 Thread Grant Taylor

On 4/11/20 2:08 PM, antlists wrote:
Okay, it was a long time ago, and it was MS-Mail (Exchange's 
predecessor, for those who can remember back that far), but I had an 
argument with my boss. He was well annoyed with our ISP for complying 
with RFC's because they switched to ESMTP and MS-Mail promptly broke.


I don't recall any RFC (from the time) stating that ESMTP was REQUIRED. 
It may have been a SHOULD.


The ISP chose to make the change that resulted in ESMTP.

Also, I'm fairly sure that MS-Mail didn't include SMTP in any capacity. 
It required an external MS-Mail SMTP gateway, which Microsoft did 
sell, for an additional cost.


The *ONLY* acceptable reason for terminating a connection is when you 
receive the command "BYE", so when Pipex sent us the command EHLO, 
MS-Mail promptly dropped the connection ...


I'll agree that what you're describing is per the (then) SMTP state 
machine.  We have since seen a LOT of discussion about when it is proper 
or not proper to close the SMTP connection.


If the MS-Mail SMTP gateway sent a 5xy error response, I could see how 
it could subsequently close the connection per the protocol state machine.


Pipex, and I suspect other ISPs, had to implement an extended black list 
of customers who couldn't cope with ESMTP.


If the MS-Mail SMTP gateway hadn't closed the connection and instead 
just returned an error for the command being unknown / unsupported, 
Pipex would have quite likely tried a standard HELO immediately after 
getting the response.


Also, we're talking about the late '90s during the introduction of 
ESMTP, which was a wild west time for SMTP.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: How to disable fuzzy-search in emerge?

2020-04-10 Thread Grant Taylor

On 4/10/20 11:00 AM, Grant Edwards wrote:

Yes, that works!


Good.


Thanks!!


You're welcome.

I don't know why it didn't occur to me to check for a make.conf 
variable instead of an environment variable or USE flag.  Of course 
now that I know that make.conf variable's name, I have found it in a 
few other places in the emerge man page, and there's a clear 
description of it in make.conf(5).

Unfortunately, the emerge man page doesn't really discuss make.conf 
except for a few places where it's mentioned that some specific 
option can be controlled via make.conf.


I often feel like the make.conf file is a collection of environment 
variables, almost as if the file was sourced prior to running emerge. 
As such, I have to realize that if something looks like an environment 
variable, it can probably go into the make.conf as such.
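Concretely, the setting under discussion can live in make.conf exactly 
like an environment variable.  A sketch of the fragment (combine with 
any EMERGE_DEFAULT_OPTS you already set):

```
# /etc/portage/make.conf fragment: default options applied to every
# emerge invocation, as if exported in the environment.
EMERGE_DEFAULT_OPTS="--fuzzy-search=n"
```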



--
Grant


I took pause for a moment wondering if this was something I typed or 
not.  ;-)




--
Grant. . . .
unix || die



Re: [gentoo-user] How to disable fuzzy-search in emerge?

2020-04-10 Thread Grant Taylor



On 4/10/20 10:08 AM, Grant Edwards wrote:
Yes, I'm aware I can add "--fuzzy-search n" to make it act sane, but 
is there an environment variable or USE flag or _something_ to make 
emerge --search do the right thing by default?

Does adding it to EMERGE_DEFAULT_OPTS in /etc/portage/make.conf help?



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Alternate Incoming Mail Server

2020-04-08 Thread Grant Taylor

On 4/8/20 4:06 PM, Michael Orlitzky wrote:
The driving force behind junkemailfilter.com passed away almost two 
years ago:


Hum.

That doesn't call the technology behind it into question.  Though it 
does call into question the longevity of it.


Maybe prematurely (?), I removed their lists from our servers 
shortly thereafter. You should check if that service is still 
doing anything. It would be quite bad, for example, if the domain 
junkemailfilter.com expired and if someone else bought it and decided 
to start accepting your email.


Valid concern.

Perhaps it's time for me to start creating my own custom SMTP engine to 
do the same types of tests that JEF-PT does / did.




--
Grant. . . .
unix || die





--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Alternate Incoming Mail Server

2020-04-08 Thread Grant Taylor

On 4/8/20 3:36 PM, Neil Bothwick wrote:

So does that mean you have four MX records?


Yes.


Nolist server
Primary MX
Backup MX
Project Tar server

in order of decreasing priority?


Exactly. (1)

$ dig +short +noshort mx tnetconsulting.net | sort
tnetconsulting.net. 604800  IN  MX  10 graymail.tnetconsulting.net.
tnetconsulting.net. 604800  IN  MX  15 tncsrv06.tnetconsulting.net.
tnetconsulting.net. 604800  IN  MX  20 tncsrv05.tnetconsulting.net.
tnetconsulting.net. 604800  IN  MX  99 tarbaby.junkemailfilter.com.

(1) They aren't always returned in order due to round-robin DNS.  But 
things that use MX records know to sort based on MX weight.
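That weight-based sorting is trivial to illustrate with the records 
above (a toy sketch; a real MTA also randomizes among equal preferences):

```shell
# Simulate MX selection: sort by preference (lowest = most preferred)
# regardless of the round-robin order DNS returned the records in.
best=$(printf '%s\n' \
    '99 tarbaby.junkemailfilter.com.' \
    '10 graymail.tnetconsulting.net.' \
    '20 tncsrv05.tnetconsulting.net.' \
    '15 tncsrv06.tnetconsulting.net.' | sort -n | head -n 1)
echo "$best"
```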




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Alternate Incoming Mail Server

2020-04-08 Thread Grant Taylor

On 4/8/20 7:39 AM, Grant Edwards wrote:
NB: The cheap VPS instances that I work with do have static IP 
addresses, but they share that static IP with a bunch of other VPS 
instances.  If you want your VPS to have a non-shared static IP 
address, then make sure that's what you're signing up for (it costs 
more).


I think we're thinking two different things for VPS.  I'm thinking 
Virtual Private Server, as in a Virtual Machine.


I've not seen any Virtual Private Servers that re-use the same IP as 
other Virtual Private Servers.


It sounds to me like you might be talking about Virtual Hosting of web 
sites, which do tend to put multiple web sites on the same IP.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-07 Thread Grant Taylor

On 4/7/20 4:53 AM, Ashley Dixon wrote:
Grant's mail server, I assume, is configured with the highest security 
in mind, so I can see how a mail server with a dynamic I.P. could 
cause issues in some contexts.


I don't do any checking to see if the IP is from a dynamic net block or 
not.  Some people do.


I just wish my I.S.P. offered _any_ sort of static I.P. package, 
but given that I live in remote area in the north of England, I.S.P.s 
aren't exactly plentiful.


If all you're after is a static IP and aren't worried about sending 
email from it, you can get a cheap VPS and establish a VPN from your 
house to it.  Use the static IP of said VPS as your home static IP.  }:-)




--
Grant. . . .
unix || die





--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-07 Thread Grant Taylor

On 4/6/20 10:49 PM, J. Roeleveld wrote:
I am afraid most (if not all) ISPs will reject emails if the reverse 
DNS does not match.


My experience has been that there needs to be something for both the 
forward and reverse DNS.  Hopefully they match each other and, as I 
call it, "round resolve" each other.  Ideally, they round resolve /and/ 
match the SMTP HELO / EHLO name.


I think you can get away with at least the first part.  There will 
likely be warnings, but they probably won't prevent email delivery in 
and of themselves.


Using a dynamic range is another "spam" indicator and will also get 
your emails blocked by (nearly) all ISPs.


Yep.

If it's not blatant blocking of believed to be dynamic clients (how is 
left up to the reader's imagination), you start to run into additional 
filtering that may or may not reject the message.


I would suggest putting your outbound SMTP server on a cheap VM hosted 
somewhere else. Or you get an outbound SMTP-service that allows you 
to decide on domain name and email addresses.


Unfortunately the spammers have made many such cheap VMs IP net blocks 
have bad reputations.  I'm starting to see more people blocking the 
cheaper VPS providers.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 3:17 PM, Ashley Dixon wrote:

Hello,


Hi,

[O.T.] Unfortunately, Grant, I cannot reply to your direct e-mail. My 
best guess is that you have a protection method in place in the 
event that the reverse D.N.S. does not match the forward ?


You're close.  I do require reverse DNS.  I will log a warning if 
forward and reverse don't match, but the email should still flow.  Lack 
of reverse DNS is problematic though.


As I'm on a domestic I.P., this is out of my control (i.e., `nslookup 
mail.suugaku.co.uk` returns my I.P., but `nslookup ` returns 
some obscure hostname provided by my I.S.P.).


Oops!

Been there.  Done that.

I've added your mail server's name & IPv4 & IPv6 addresses to my 
/etc/hosts file.  Please try again.  I'll also send you an email from an 
alternate address.


Sadly, it doesn't look like you can use the hack that I've used in the past.

If forward and reverse DNS do match, you can configure your 
outgoing email server to simply use that name when talking to the 
outside world.
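In Postfix, as one example, that hack is two lines of main.cf.  A 
hypothetical sketch, where mail.example.net stands in for the name your 
PTR record returns:

```
# Present the name that forward and reverse DNS agree on when speaking
# to the outside world (smtp_helo_name defaults to $myhostname anyway).
myhostname = mail.example.net
smtp_helo_name = $myhostname
```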


Unfortunately, it doesn't look like you can do that because the forward 
DNS doesn't return an A / AAAA record for the name that the PTR 
returns.



This sounds quite enticing; I'll have a look, thanks :)


:-)

I didn't mean to infer that my back-up server would be different to my 
primary server, as my primary is rather minimal. And yes, good point, 
I suppose if anything, I should have tougher anti-spam measures on 
my backup MX :)


For the sake of simplicity and consistency, I'd encourage you to have 
the same spam / virus / hygiene filtering on all mail servers that you 
control.


This is what I was intending to do. I hadn't even considered 
dynamically playing with the D.N.S., given that addresses are commonly 
cached for a short period to avoid hammering name-servers (?)


You have influence on how long things are cached for by adjusting the 
TTL value in your DNS.


I say "influence" vs "control" because not all recursive DNS servers 
honor the TTL value that you specify.  Some servers set a lower and / or 
upper bound on the TTL that they will honor.


Oh my goodness, I feel silly now :) I was considering just using 
courier to catch the incoming mail, and then rsync it over to my 
primary when it comes back on-line,


I suppose that you could have a fully functional mail server, including 
delivering to mailboxes, and then synchronize the mailboxes between the 
servers.  But that would be more work, and I'm guessing that's contrary 
to the simple alternate server that I think you are after.



but using an S.M.T.P.-forwarder certainly seems more elegant.


;-)

Cheers for your help and detailed explanations Grant. Not only will 
your suggestions make my humble mail server operate better, but it's 
also great fun to set up :)


You're quite welcome.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 1:16 PM, Michael Orlitzky wrote:
Greylisting suffers from one problem that unplugging the server 
doesn't: greylisting usually works on a triple like (IP address, 
sender, recipient), and can therefore continue to reject people who do 
retry, but retry from a different IP address. This occasionally affects 
things like Github notifications and requires manual intervention.


I used to be a strong advocate of greylisting.  I had some of the 
problems that have been described.  Then I switched to Nolisting, a 
close varient of greylisting that I haven't seen any of the same (or 
any) problems with.


If a sending email server follows the RFCs and tries multiple MXs, then 
email gets through in seconds instead of having to re-try and wait for a 
greylist timeout.
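A nolisting arrangement is purely a DNS trick.  A hypothetical zone 
fragment (all names invented) illustrating the idea:

```
; Nolisting sketch: the best-preference MX points at an address with
; nothing listening on port 25.  RFC-compliant senders immediately
; fall through to the next MX; much spamware does not.
example.com.   IN  MX  10 decoy.example.com.   ; no SMTP listener
example.com.   IN  MX  20 mail.example.com.    ; the real server
```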


There's also no issue with (re)trying from different IPs.

(I would still recommend greylisting, personally, but it's a harder 
sell than that of foregoing a secondary MX.)


Greylisting, or better, nolisting is a very good thing.

See my other email for why I disagree about foregoing additional MXs.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 1:03 PM, Rich Freeman wrote:
More often than not, yes.  The main exception I've seen are sites 
that email you verification codes, such as some sorts of "two-factor" 
implementations (whether these are really two-factor I'll set aside 
for now).  Many of these services will retry, but some just give up 
after one attempt.


I believe that's a perfect example of services that should send email 
through a local MTA that manages a queue and retries mail delivery. 
There is no need for this type of queuing logic and complexity to be in 
the application.  Especially if the application is otherwise stateless 
and runs for the duration of a single HTTP request.




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 11:55 AM, Michael Orlitzky wrote:

Ok, you're right.


;-)

My suggestion to create multiple records was in response to the claim 
that there are MTAs that will try a backup MX, but won't retry the 
primary MX, which is false to begin with. Trying to argue against an 
untrue premise only muddied the water.


I will not exclude the possibility of email sending programs that will 
use backup MX(s) and not use the primary MX.  But these are not general 
purpose MTAs.



Back to the point: all real MTAs retry messages. You can find bulk
mailers and spam zombies and one or two java "forgot password" webforms
from 1995 that won't retry; but by and large, everyone does.


By and large, yes all well behaved MTAs do re-try.

The web form is a classic example of what a local queuing MTA is good for.

If anyone has evidence to the contrary, it should be easy to find 
and present. If, on the other hand no such MTAs exist, then it's 
quite pointless to run a second MX for the five seconds a year that 
it's useful.


I disagree.

  · I've seen connectivity issues break the connection between a sending 
host and a primary MX:
 · on the receiving side,
 · on the sending side,
 · as breakage in the Internet (outside of the direct providers on 
the ends),
 · or as a combination of the above.
  · It's not five seconds a year; it's more likely an hour or two a 
year, possibly aggregated.
  · You can't control the retry time frame on the sending side.
  · You can control the retry / forward time on secondary MX(s).
  · Messages can be canceled while sitting in sending systems' queues.
  · It's much more difficult for someone to interfere with messages 
sitting on your systems, especially without your knowledge.
  · It's /still/ considered best practice to have /multiple/ inbound MX 
servers, be it primary / secondary, an anycasted instance, or other 
tricks.
  · Do you want to tell the CEO that he can't have his email for 36 
hours because you patched the mail server and the sender's system won't 
retry for 35.5 hours?


My professional and personal opinion is that if you're serious about 
email, then you should have multiple inbound MX servers.




--
Grant. . . .
unix || die








Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 11:14 AM, Michael Orlitzky wrote:

Why don't you say which MTA it is that both (a) combines MX records with
different priorities, and (b) doesn't retry messages to the primary MX?


You seem to have misconstrued the meaning of my message.

I only stated that I've seen multiple MTAs to (a) combine MX records.

I never said that those MTAs also didn't (b) retry message delivery.

I have seen the following MTAs do (a) in the past:

 · Microsoft Exchange
 · IIS SMTP service
 · Novell GroupWise
 · Lotus Domino
 · Sendmail
 · Postfix

Those are the ones that come to mind.  I'm sure there are others that I 
ran into less frequently.


My understanding from researching this years ago is that it used to be 
considered a configuration error to have multiple MX records with names 
that resolve to the same IP.  As such, MTAs would combine / deduplicate 
them to avoid wasting resources.  There is no point trying to connect 
to the same IP, even by a different name, an additional time after the 
previous attempt failed during the /current/ delivery / queue cycle.
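That deduplication can be sketched roughly as follows (a hypothetical 
helper, not any particular MTA's actual code):

```python
def dedupe_mx(records, resolve):
    """Collapse MX records whose hostnames resolve to the same IP.

    records: list of (priority, hostname) tuples from an MX lookup.
    resolve: function mapping hostname -> IP address string.
    Returns the surviving (priority, hostname) list, lowest first.
    """
    seen_ips = set()
    survivors = []
    for priority, host in sorted(records):
        ip = resolve(host)
        if ip in seen_ips:
            continue  # same target IP already in this delivery cycle
        seen_ips.add(ip)
        survivors.append((priority, host))
    return survivors

# Two MX names pointing at the same address collapse to one attempt.
fake_dns = {"mx1.example.com": "192.0.2.10",
            "mx2.example.com": "192.0.2.10"}
print(dedupe_mx([(10, "mx1.example.com"), (20, "mx2.example.com")],
                fake_dns.get))
# → [(10, 'mx1.example.com')]
```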




--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 6:35 AM, Ashley Dixon wrote:

Hello,


Hi,

After many hours of confusing mixtures of pain and pleasure, I have 
a secure and well-behaved e-mail server which encompasses all the 
features I originally desired.


Full STOP!

I hoist my drink to you and tell the barkeep that your next round is on me.

Very nicely done!!!

In all seriousness, running your own email server is definitely not 
easy.  DNS, web, and database servers are easier.


This is especially true, by an order of magnitude, if you are going to 
be sending email and do all of the necessary things to get other mail 
receivers to be happy with email outbound from your server.


~hat~tip~

However, in the event that I need to reboot the server (perhaps a 
kernel update was added to Portage), I would like to have a miniature 
mail server which catches incoming mail if, and only if, my primary 
server is down.


Okay

I have a Gentoo installation on an old Raspberry Pi (model B+), and was 
curious if such a set-up was possible ?


Can you get a Raspberry Pi to function as a backup server?  Yes.  Do you 
want to?  Maybe, maybe not.


I've seen heavier inbound email load on my backup mail server(s) than I 
have on my main mail server.  This is primarily because some 
undesirables send email to the backup email server in the hopes that 
there is less spam / virus / hygiene filtering there.  The thought 
process is that people won't pay to license / install / maintain such 
software on the "backup" email server.


I encourage you to take a look at Junk Email Filter's Project Tar [1].

Aside:  JEF-PT encourages people to add a high order MX to point to 
JEF-PT in the hopes that undesirable email to your domain will hit their 
MX, which will always defer the email and never accept it.  Their hope 
is to attract as many bad actors to their system as they can, where they 
analyze the behavior of the sending system; does it follow RFCs, does it 
try to be a spam cannon, etc.  They look at the behavior, NEVER content, 
and build an RBL.  They provide this RBL for others to use if they 
desire.  —  I have been using, and recommending, JEF-PT for more than a 
decade.


JEF-PT could function as the backup MX in a manner of speaking.  They 
will never actually accept your email.  But they will look like another 
email server to senders.  As such, well behaved senders will queue email 
for later delivery attempts.
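That "always defer" behavior is plain SMTP temporary failure.  An 
illustrative session (the banner and wording here are made up; the point 
is the 4xx reply at RCPT, which tells well-behaved senders to queue the 
message and retry later):

```
220 mx.example.net ESMTP
HELO sender.example.com
250 mx.example.net
MAIL FROM:<someone@example.com>
250 2.1.0 OK
RCPT TO:<you@yourdomain.example>
451 4.7.1 Greylisted, please try again later
```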


I also want the solution to be as minimal as possible. I see the 
problem as three parts:


This type of thinking is how you end up with different spam / virus / 
hygiene capabilities between the primary and secondary email systems. 
Hence why many undesirables try secondary email system(s) first.  ;-)


In for a penny, in for a pound.

If you're going to run a filter on your primary mail server, you should 
also run the filter on your secondary mail server(s).


(a) Convincing the D.N.S.\ and my router to redirect mail to the 
alternate server, should the default one not be reachable;


DNS is actually trivial.  That's where multiple MX records come into 
play.  —  This is actually more on the sending system honoring what DNS 
publishes than it is on the DNS server.
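For illustration, the published records might look like this in a zone 
file (names and addresses are placeholders from the documentation 
ranges; senders must try the lowest preference value first):

```
example.com.      IN  MX  10 mx1.example.com.
example.com.      IN  MX  20 mx2.example.com.
mx1.example.com.  IN  A   192.0.2.10
mx2.example.com.  IN  A   198.51.100.20
```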


Aside:  Were you talking about changing what DNS publishes dynamically 
based on the state of your email server?  If so, there is a lot more 
involved with this, and considerably more gotchas / toe stubbers to deal 
with.


There are some networking tricks that you can do in some situations to 
swing the IP of your email server to another system.  This assumes that 
they are on the same LAN.


 · VRRP is probably the simplest one.
 · Manually moving also works, but is less simple.
 · Scripting is automated manual.
 · Routing is more complex.
· Involves multiple subnets
· May involve dynamic routing protocols
· Manual / scripting 
 · NAT modification is problematic

(b) Creating the alternate mail server to be as lightweight 
as possible. I'm not even sure if I need an S.M.T.P.\ server 
(postfix). Would courier-imap do the trick on its own (with 
courier-authlib and mysql) ?


You will need an SMTP server, or other tricks ~> hacks.  Remember that 
you're receiving email from SMTP servers, so you need something that 
speaks SMTP to them.


Courier IMAP & authlib are not SMTP servers.  I sincerely doubt that 
they could be made to do what you are wanting.


(c) Moving mail from the alternate server to the main server once 
the latter has regained consciousness.


SMTP has this down pat in spades.

This is actually what SMTP does: move email from system to system to 
system.  You really are simply talking about conditionally adding another 
system to that list.


Hint:  SMTP is the industry standard solution for what you're wanting to 
do, /including/ getting the email from the alternate server to the main 
server.


I realise this is a slightly different problem, and is not even 
necessarily _required_ for operation, although it's certainly a 

Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 10:43 AM, Michael Orlitzky wrote:
Well, I can't refute an anecdote without more information, but if 
you're worried about this you can create the same MX record twice so 
that the "backup" is the primary.


That's not going to work as well as you had hoped.

I've run into many MTAs that check things and realize that the hosts in 
multiple MX records resolve to the same IP and treat them as one.


You may get this to work.  But I would never let clients rely on this.



--
Grant. . . .
unix || die



Re: [gentoo-user] Alternate Incoming Mail Server

2020-04-06 Thread Grant Taylor

On 4/6/20 10:19 AM, J. Roeleveld wrote:

I find that, with a backup MX, I don't seem to loose emails.


Having multiple email servers of your own, primary, secondary, tertiary, 
etc., makes it much more likely that the email will move from the sending 
system's control to your control.  I think it's self-evident that it's 
better to have inbound email somewhere under your own control.


I've run into poorly configured sending servers that will try 
(re)sending once a day / week.  So, you would be waiting on them for 
that day / week to re-send, presuming they do re-send.


Conversely, if you have your own secondary mail server, the inbound 
message will likely land at the secondary mail server seconds ~> minutes 
after being unable to send to the primary mail server.  Thus when your 
primary mail server comes back online minutes later, your secondary can 
send it on to your primary.  Thus a net delay of minutes vs hours / days 
/ week(s).


I have, however, found evidence of mailservers belonging to big ISPs 
not retrying emails if there is no response from the singular MX.


I've not noticed this, but I've not been looking for it, and I've had 
secondary email servers for decades.


Note:  You don't have to have and manage your own secondary mail server. 
 You can outsource the task to someone you trust.  The primary thing is 
that there are multiple servers willing to accept responsibility for 
your email.  Be they your servers and / or someone else's servers acting 
as your agent.


General call to everybody:  If you're an individual and you want a 
backup (inbound relay) email server, send me a direct email and we can 
chat.  I want to do what I can to help encourage people to run their own 
servers.  I'm happy to help.




--
Grant. . . .
unix || die



Re: [gentoo-user] ...recreating exactly the same applications on a new harddisc?

2020-04-04 Thread Grant Taylor

On 4/4/20 11:34 AM, tu...@posteo.de wrote:

Hi,


Hi,

I am currently preparing a new harddisc as home for my new Gentoo 
system.


Is it possible to recreate exactly the same pool of 
applications/programs/libraries etc..., which my current system has - 
in one go?


Barring cosmic influences, I would expect so.

That is: Copy  from the current system into the chroot 
environment, fire up emerge, go to bed and tomorrow morning the new 
system ready...?


Does this  exist and is it reasonable to do it this way?

Thanks for any hint in advance!


I think that any given system is the product of its various components. 
 Change any of those components, and you change the product.


I see the list of components as being at least:

 · world file
 · portage config (/etc/portage)
· USEs
· accepted keywords
· accepted licenses
 · portage files (/usr/portage)
· this significantly influences the version of packages that get 
installed, which is quite important

 · kernel
· version
· config

Copying these things across should get you a quite similar system.  I 
suspect you would be down to how different packages are configured.


But the world file is only one of many parts that make up the system.

I didn't include distfiles because theoretically, you can re-download 
files.  However, I've run into cases where I wasn't able to download 
something and had to transfer (part of) distfiles too.


If you're going to the trouble to keep a system this similar, why not 
simply copy the system from one drive / machine to another?




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: mail cannot send emails (trying to use it with smartd)

2020-04-03 Thread Grant Taylor

On 4/3/20 4:01 PM, Grant Edwards wrote:
If you want to become an ultra-professional, that's fine.  If you 
just want to be able to send mail interactively from mutt...


OK, that's a bad example now that mutt has built-in SMTP client 
capabilities.


How about ... if you only want to get email off your local box to a 
remote server and don't care how it's done, then Sendmail probably isn't 
your best / easiest / first choice.  ;-)




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: mail cannot send emails (trying to use it with smartd)

2020-04-03 Thread Grant Taylor

On 4/2/20 10:47 PM, Caveman Al Toraboran wrote:

wow, didn't know sendmail's syntax was so hard it needed a compiler
:D thank you very much for your help.  highly appreciated.

I think that's an inaccurate statement.

First, m4 is a macro package, not a compiler.

Second, the macros reduce the macro config (mc) file to something 
manageable, instead of hundreds of lines in which it is easy to make a 
syntax mistake.


Third, the macros make the mc file more semantic in nature instead of 
Sendmail config file (cf) specific.  Meaning that one line allows 
changing values in multiple places that use the common information, like 
the domain name(s).


My home system has a 33 line mc file (including comments) that expands 
to 1915 lines of cf file (including comments).
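For scale, a minimal mc file looks something like this (the macro names 
are standard sendmail m4, but the specific OSTYPE / FEATURE choices here 
are placeholders, and the cf.m4 include path varies by distribution):

```m4
dnl Minimal illustrative sendmail.mc; expand with something like:
dnl   m4 /usr/share/sendmail-cf/m4/cf.m4 sendmail.mc > sendmail.cf
OSTYPE(`linux')dnl
DOMAIN(`generic')dnl
FEATURE(`access_db')dnl
MAILER(`local')dnl
MAILER(`smtp')dnl
```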


Both the input and the output are ASCII text.  Humans that understand 
cf syntax can read both files.


This is far from turning something into byte code.



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: mail cannot send emails (trying to use it with smartd)

2020-04-03 Thread Grant Taylor

On 4/2/20 8:23 PM, Grant Edwards wrote:
It's very powerful but the configuration file format is almost 
impossible to understand, so people developed an m4 application that 
accepted a _slightly_ less cryptic language and generated the 
sendmail configuration file.

The configuration file is far from impossible to understand.

Calling it a configuration file is sort of a misnomer in that it's more 
of a programming language than a configuration file.  Some have even said 
that it's Turing complete.


It does have some things going against it though.

1)  It is highly dependent on the tab character (one or more) being used 
to separate two parts of specific lines.  This is easily visually lost 
as well as frequently lost with bad copy & paste.  But it's trivial to 
know about and correct.  (Compare that to Python that has lost all of 
its leading whitespace.)


2)  The "config" file is really multiple subroutines that are called in 
specific instances and do very specific things.  You must know which one 
you need to use when.


3)  Sendmail's logic is different than what most people are used to. 
It's not quite RPN.  But it's different enough that many people have 
problems with it.  I think it's more like relay logic.  Each line in a 
rule-set has an opportunity to apply to the current working space.  Each 
line can modify the working space, possibly directly or indirectly by 
calling other things.  If you aren't careful, the working set can be 
inadvertently matched by multiple lines (rules).  As such, the working 
set is specifically modified so that other lines don't match if they 
should not.  There is a lot of pattern manipulation to keep track of and 
it takes practice.
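That relay-style matching can be mimicked with a toy rewriter (Python as 
an analogy only; this is not sendmail's cf syntax or semantics, just the 
"each rule may rewrite the workspace" idea):

```python
def run_ruleset(workspace, rules):
    """Apply each (pattern, replacement) rule in order to the workspace.

    Like a sendmail ruleset, every rule gets a chance to rewrite the
    current working text; an earlier rewrite can stop later rules from
    matching, which is how rules 'protect' already-handled input.
    """
    for pattern, replacement in rules:
        if pattern in workspace:
            workspace = workspace.replace(pattern, replacement)
    return workspace

rules = [
    ("user@", "USER-OK@"),   # first matching rule rewrites the workspace
    ("user@", "FALLBACK@"),  # no longer matches: the rewrite above changed it
]
print(run_ruleset("user@example.com", rules))
# → USER-OK@example.com
```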


I'm sure that there are others.  But those are the big ones that come to 
mind at the moment.


At its peak back in the early 90's there were approximately five 
people in the world who actually understood sendmail, and none of 
them ever worked where you did.


I don't know about the '90s, but I do know that in the '00s and '10s, 
your statement is an exaggeration to the point of being hyperbole.


I have witnessed an active Sendmail support community for about 15 of 
the last 20 years.  Most of that support was via the comp.mail.sendmail 
newsgroup.


The rest of us stumbled in the dark using the finely honed 
cargo-cult practices cutting and pasting random snippets out of 
example configurations to see what happened.


Your lack of use of those resources doesn't mean that said resources 
weren't available.


Usually what happened is that mail was lost or flew around in a loop 
multiplying to the point where a disk partition filled up.


Yep.

That said, sendmail has features that no other MTA has.  For 
example, it can transfer mail using all sorts of different protocols 
that nobody uses these days.


It's not just (on the wire) protocols that sendmail supports.  Many of 
which are effectively obsolete save for specific microcosms.  Sendmail 
also supports interfacing with other programs in a very flexible manner.


It is fairly easy to have Sendmail support Mailman (et al.) in such a 
way as that you don't need to change anything on the email server when 
adding or removing mailing lists.  No, I'm not talking about automated 
alias generation.  There is no need for alias generation when Sendmail 
and Mailman are connected properly.


Sendmail quite happily supports LMTP into local mail stores / programs. 
 This is quite handy when you want something like a recipient's sieve 
filter to be able to reject a message.



Back in the 90's a number of replacement MTAs were developed such as
qmail, postfix, exim, etc.  When you installed one of these, (instead
of the classic sendmail), they would usually provide an executable
file named "sendmail" that accepted the same command line arguments
and input format that the original did.  That allowed applications who
wanted to send email to remain ignorant about exactly what MTA was
installed.


Yep.  The "sendmail" command has become a de facto industry standard 
that most MTAs emulate.
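The input those emulated "sendmail" binaries accept is simply an RFC 
5322 message on stdin; with `-t`, recipients are taken from the headers. 
A sketch of building that input in Python (the addresses are 
placeholders, and the commented subprocess call assumes a 
sendmail-compatible binary at the usual path):

```python
from email.message import EmailMessage

# Build the message exactly as it would be piped to sendmail's stdin.
msg = EmailMessage()
msg["From"] = "smartd@host.example"   # placeholder addresses
msg["To"] = "admin@host.example"
msg["Subject"] = "SMART warning on /dev/sda"
msg.set_content("Disk /dev/sda reports reallocated sectors.\n")

# Hand the flattened message to the local sendmail(-compatible) binary;
# -t tells it to pick recipients out of the To:/Cc:/Bcc: headers:
#   subprocess.run(["/usr/sbin/sendmail", "-t", "-oi"],
#                  input=msg.as_bytes())
print(msg.as_string().splitlines()[0])
# → From: smartd@host.example
```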




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: mail cannot send emails (trying to use it with smartd)

2020-04-03 Thread Grant Taylor

On 4/2/20 6:26 PM, Caveman Al Toraboran wrote:

though i'm a bit curious about sendmail (if your time allows).


Feel free to ask questions about sendmail.  I'll do my best to answer.


do you mean the ebuild "sendmail"? or the command "sendmail"?


In this context, I used ebuild as a reference to the MTA known as Sendmail.

i used to think it's a swiss-army kind of tool (used to call 
"sendmail" in my cgi scripts decades ago without any infrastructure; 
by just directly zapping recipient's smtp gateway).


Yes, Sendmail can be a Swiss Army knife.  That's one of its advantages. 
 That's also one of its disadvantages.


Your historic use of Sendmail is an example of using a local queuing 
MTA.  Your CGI scripts passed the message off to the queuing MTA and 
didn't need to worry about what to do if the remote mail server couldn't 
be reached.  You didn't have to bother with detecting the error and 
reporting it to the end user via the web form.  You didn't have to 
bother with storing information for later retry.  The local queuing MTA 
did all of that for you.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: mail cannot send emails (trying to use it with smartd)

2020-04-03 Thread Grant Taylor

On 4/2/20 8:18 AM, Grant Edwards wrote:
Then DO NOT use sendmail.  Sendmail is only for the 
ultra-professional who already knows how to configure it (not 
joking).


I take exception to that for multiple reasons:

1)  Bootstrapping - you can't learn something without actually using it.
2)  I've been quite happily using Sendmail on multiple platforms for 20 
years.
3)  Sendmail is capable of working in every single email scenario that 
I've seen in said 20 years.  The same can't be said for other MTAs.


If all your mail gets sent via a single SMTP server at your ISP (or 
wherever), then Sendmail is definitely not what you want.


That depends.

If you have a fleet of Sendmail servers, chances are good that you will 
prefer to re-use the same solution, even in small / simple role.  Read: 
 The Devil that you know.


If you don't need local queueing (so you can send email while 
offline), then I'd pick ssmtp.


ProTip:  You really do want local outbound queueing /somewhere/ on the box.

You don't want your web application to error out when it can't reach 
its SMTP server.  You don't want to lose the receipt for the 
transaction that the customer just made.  Can you regenerate the 
receipt?  ;-)


You can have each application do its own queuing / re-sending, or you 
can rely on the local MTA to do it for you.


Where do you want the queuing complexity?

A local queuing MTA is simple and solves a LOT of problems.

If you want something even more sophisticated (e.g. something that 
can deliver mail locally and receive inbound mail using SMTP), then 
postfix or exim would probably be the next step up:


I would add Sendmail to the front of that list.  But I might be biased.

I've read claims that there are things you can do with sendmail that 
Exim or Postfix can't handle, but I'm not sure I believe it.  I am 
sure I'll never need to do any of those things.

I don't know Exim or Postfix well enough to comment with any authority.

I do know that Sendmail, Postfix, and Exim all handle (E)SMTP without 
any problem.


I think that Postfix can be made to handle UUCP.  Sendmail has four 
different ways that it can use UUCP, built in.  I have no idea about Exim.


Sendmail can easily work with other protocols, Mail11, fax, pager, news 
gateway (send and / or receive).  It's also easy to add additional 
protocols without needing to recompile anything, only configuration 
changes are needed.


I don't know where in the list I lost Postfix and / or Exim, but I 
expect that they didn't make it through the last paragraph.


For a long time, Sendmail did have one claim to fame that no other MTA 
had.  Specifically Sendmail had the ability to use milters (mail 
filters) and filter email during the SMTP transaction.  It's trivial to 
hook ClamAV, SpamAssassin, and just about anything you want into 
checking mail during the SMTP transaction such that you have the ability 
to reject, not bounce, the message.  Thus making the sending host be 
responsible for it.


I'm sure there are many more and far more esoteric things that Sendmail 
can do.  Though I doubt that many of them are as germane today as they 
were in the mid '90s.  I was recently playing with the ability to have a 
domain spread across multiple servers and configuring Sendmail to route 
messages to the proper back end server, a feature known as LDAP routing.


Yes, Sendmail has a lot of power, much like unix.  It will happily hand 
you a loaded gun, encourage you to point it at your feet and empty the 
magazine as fast as possible.  When you're done, it will help you reload 
and do it again.


If you know how to wield this power Sendmail can be a wonderful tool 
that can be used in all of the scenarios described in this thread.  It's 
also relatively trivial to have Sendmail be a basic queuing outbound 
only MTA that uses ISP smart hosts to provide SMTP services to local 
applications.


But I really object to the "ultra-professional" comment, because 
everybody has to start somewhere.




--
Grant. . . .
unix || die





Re: [gentoo-user] OT: looking for email provider

2020-02-08 Thread Grant Taylor

On 2/6/20 8:56 PM, John Covici wrote:
I do run my own mail server for years, but I would like to know how to 
run those "hygene features".  I do have spf, but that is about it -- 
maybe this should be another thread, but I want to keep doing this 
and be sure of having my mail delivered to where its going which 
sometimes gmail gives me problems.


I don't know if the gentoo-user mailing list is the best location 
to have this discussion.  If you think it is, start a new thread and 
I'll reply with more information about what I'm doing.


Or, feel free to email me directly and I'll share the information off-list.



--
Grant . . .
unix || die



Re: [gentoo-user] OT: looking for email provider

2020-02-06 Thread Grant Taylor

On 2/6/20 3:36 PM, Laurence Perkins wrote:

Sure you can set up just a simple email server


Having run a personal email server for 20 years, including all 
contemporary hygiene measures, I don't think "simple" and "email server" 
go together any more.


I can rattle off most of what I'm doing in short order.  But when doing 
so takes 5+ minutes, I think we're beyond the realm of "simple".




--
Grant . . .
unix || die



Re: [gentoo-user] External hard drive and idle activity

2020-01-01 Thread Grant Taylor

On 1/1/20 5:09 PM, Dale wrote:

Howdy,


Hi,

As some may recall, I have a 8TB external SATA hard drive that I do 
back ups on.  Usually, I back up once a day, more often if needed. 
Usually I turn the power on, mount it, do the back ups, unmount and 
turn the power back off.  Usually it is powered up for 5 minutes or so. 
When I unmount it tho, I sometimes notice it is still doing something. 
I can feel the mechanism for the heads moving.  It has a slight 
vibration to it.  Questions are, what is it doing and should I let it 
finish before powering it off?  I'd assume that once it in unmounted, 
the copy process is done so the files are safe.  I guess it is doing 
some sort of internal checks or something but I'm not sure.


There might be some activity for up to 30 seconds after umount finishes 
and returns to the command prompt.


Note:  umount will normally block until buffers are flushed to disk.


Is it safe to turn it off even tho it is doing whatever it is doing?


I wouldn't.


Should I wait?


I would.


Does it matter?


Maybe.

Is the drive SATA connected or USB connected to the machine?

In some ways it doesn't matter.  You can tell the kernel to eject the 
the drive.  Once that finishes, there are no active remnants of the 
drive in the kernel.  5–15 seconds after that, you should be quite safe 
to power 
the drive off.


echo 1 > /sys/class/block/$DEVICENAME/device/delete

That will cause the kernel to gracefully disconnect the drive.


Thanks.


:-)



--
Grant. . . .
unix || die








Re: [gentoo-user] Frontier ADSL modem and IP address

2019-12-30 Thread Grant Taylor

On 12/30/19 1:04 PM, Dale wrote:

Is there a way to find the IP for this thing?


Try running a network sniffer as you reboot it.

Most pieces of network equipment will send out some sort of broadcast 
requests that will give some hint as to how they are configured.  At 
least what subnet they are in.




--
Grant. . . .
unix || die








Re: [gentoo-user] qemu / nbd

2019-12-05 Thread Grant Taylor

On 12/5/19 12:22 PM, n952162 wrote:
But, since it's included in the package, and apparently (from the name) 
will use a NBD device, then I think the dependency is logical


I disagree.

QEMU itself does not use NBD.  Thus QEMU does not need to depend on 
qemu-nbd.  QEMU uses files on mounted file systems.


qemu-nbd is a — perhaps questionably named — utility that allows QEMU 
disk images / files to be accessed as if they were NBDs.  Not the other 
way around.  qemu-nbd does not allow QEMU to use NBDs.


As I understand it, nothing about /QEMU/ is dependent on any NBD support.



--
Grant. . . .
unix || die



Re: [gentoo-user] qemu / nbd

2019-12-05 Thread Grant Taylor

On 12/5/19 12:50 PM, Neil Bothwick wrote:
No, but since it is provided by the ebuild, the ebuild should check 
that the target system is capable of supporting it. The qemu ebuild 
already spits out warnings about missing kernel options, not all of them 
essential, so why not this one too?


I think a warning for missing NBD support is absolutely the correct way 
to go.


I think blocking / failing the ebuild is the wrong thing to do.

IMHO warning ≠ failure



--
Grant. . . .
unix || die








Re: [gentoo-user] nbd ebuild incomplete? [SOLVED]

2019-12-05 Thread Grant Taylor

On 12/5/19 12:33 AM, n952162 wrote:

The emerge should have checked for this and failed.


I don't think it should fail.  I've routinely seen emerge check for 
various kernel / network / other parameters and issue warnings about 
things not being the way that the ebuild wants.  But the ebuild does 
successfully emerge.


IMHO emerge / ebuild should not refuse to do what I tell it to do just 
because it doesn't like my custom kernel.  The lack of kernel support is 
/my/ problem.  The emerge / ebuild is capable of compiling perfectly 
fine without the kernel support.  Perhaps I'm compiling on a fast 
machine that is running a different kernel and then copying the 
utilities to another system that does have the kernel support.




--
Grant. . . .
unix || die








Re: [gentoo-user] qemu / nbd

2019-12-05 Thread Grant Taylor

On 12/4/19 11:03 PM, Walter Dnes wrote:
nbd is a "Network Block Device" driver along the lines of NFS, but 
it doesn't handle concurrency.  https://nbd.sourceforge.io/


I think I'd liken NBD to iSCSI more so than NFS.  Primarily because both 
NBD and iSCSI provide local block devices that are backed by something 
across the network.  Conversely, NFS provides access to regular files 
from across the network.


You can put a file system on an NBD / iSCSI block device if you want, 
but you don't have to.  Conversely, NFS is a file system and doesn't 
require putting a file system on top of it.



But it's generic, and can handle any *REGULAR* file system, not just NFS.


NFS is not a file system in the /typical/ sense.  There is no mkfs.nfs. 
NFS is a file system in the sense that it is mounted and provides access 
to files.


QCOW2, or raw, or whatever, is a special QEMU format.  So it requires 
QEMU libs (i.e. qemu-nbd) to decode QCOW2/RAW.


Why doesn't qemu have a dependency on NFS?


It doesn't, nor does it need to.

Same answer; they're both remote network block device systems that most 
linux users don't need,


NFS is not a block device.


and they're both unrelated to the core functionality of QEMU.


QEMU can use image files (qcow(2) or raw) on any mounted file system.

NFS qualifies as a mounted file system, but that is completely outside 
of QEMU.


Normal file systems can be put on NBD & iSCSI devices and mounted, but 
that is also completely outside of QEMU.


QEMU proper will not use NBD devices (directly) itself.

qemu-nbd is a utility that acts as an NBD server so the Linux kernel 
can be an NBD client and access qcow(2) image files.


qemu-nbd is not /needed/ for normal QEMU operation.



--
Grant. . . .
unix || die








Re: [gentoo-user] cannot compile sendmail in new gentoo installation

2019-11-14 Thread Grant Taylor

On 11/13/19 9:51 PM, John Covici wrote:
Hi.  I am trying to create a new installation as a chroot from my previous 
one since I don't have another box at hand and don't want to take this 
one down for several days to recompile everything.  Now I am stuck trying 
to emerge sendmail.  I cannot emerge 8.15.2 nor 8.14.9.  The last time 
I emerged successfully was under gcc 8.3.0 and so I tried emerging that 
but sendmail will not compile.  I have ACCEPT_KEYWORDS="~x86 ~amd64" .


I can't comment on the specific problem you're having.  But I can say 
that sendmail-8.15.2-r2 successfully (re)compiled (emerge -DuNe @world) 
this past weekend on a current system without any problems.




--
Grant. . . .
unix || die



Re: [gentoo-user] links that behave differently per calling app?

2019-11-12 Thread Grant Taylor

On 11/10/19 9:37 PM, Caveman Al Toraboran wrote:
hi - is it possible to have some kind of fancy links that know the 
name of the process that is trying to access it, and based on its name, 
it links it to a file?


I've not heard of that specifically.

e.g. `ln -s X Y` will create link Y that always refers to X whenever 
anyone tries to access Y.  but is it possible to have a fancier linking 
that creates a linking file that contains some kind of access list, 
that specifies things like:


- if accessing process is named P1, then direct it to X1.
- if accessing process is named P2, then direct it to X2.
- ...
- if accessing process is named Pn, then direct it to Xn.
- else, default to X0.


However, I have heard of something that might come close to what I 
think you're asking about.


I've never heard of support for this in Linux.  But I have heard of an 
ability that /some/ other traditional Unixes have where a named sym-link 
can point to a different file based on the contents of an environment 
variable.


Think of it as something like this:

ln -s /usr/arch/$ARCH/bin /usr/bin

Thus

/usr/bin -> /usr/arch/x86/bin   # on x86

-or-

/usr/bin -> /usr/arch/arm/bin   # on ARM

Depending on what your architecture is and thus the value of the $ARCH 
environment variable.
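That behavior (sometimes called "variant" or "context-dependent" 
symlinks) can be approximated in userspace.  A toy sketch, where $ARCH 
is a hypothetical variable name and the path layout is illustrative:

```python
def variant_target(template, env):
    """Resolve a link-target template against an environment mapping.

    A userspace stand-in for the variant symlinks some traditional
    Unixes offered: the same "link" points at different targets
    depending on the value of a variable.
    """
    # Fall back to a default when the variable is unset.
    return template.replace("$ARCH", env.get("ARCH", "x86"))

print(variant_target("/usr/arch/$ARCH/bin", {"ARCH": "arm"}))
# → /usr/arch/arm/bin
```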


I /think/ this was done based on environment variables and / or 
something else equally consistent in the kernel.


I have no idea what this type of link is called.  But it is a well 
established standard in the traditional Unix space.  Though, I think 
it's seldom used.


i think if we have this, we can solve slotting in a simpler way. 
e.g. we install libs in their own non-conflicting locations, and 
then install for them such fancy sym links with access that routes 
accessing processes to the right version of the lib.


Interesting idea.

I'd need to know more about how the links are actually implemented and 
more about slots to even fathom a guess if it would work.  I'm sure it 
would do something.  I just can't say if we would like what it does or not.


I question if environment variables are what's used, mainly because of 
the potential volatility of the environment; different interactive 
shells, different "shells" in /etc/passwd (et al.) that bypass 
interactive shells, remote commands, etc.  I could envision how the 
traditional environment variable might not behave as desired.


I also have concerns about the potential security implications of an end 
user changing something in their interactive shell's environment, thus 
altering where this type of link would point to.



thoughts?


I would very much like to see this type of functionality come to Linux. 
But I gave up hoping for it a long time ago.




--
Grant. . . .
unix || die








Re: [gentoo-user] Bounced messages

2019-11-03 Thread Grant Taylor

On 11/1/19 2:00 PM, Dale wrote:

I think we came to the conclusion that one person is causing this.


I don't agree with that conclusion.

Basically his emails trigger the spam alarm and it gets marked before 
or upon receipt by gmail.  It doesn't even make it to my in box.


I don't know if spam is the proper term per se, but it's certainly in 
the email hygiene category.


Now how an individual can find themselves in a place where their emails 
are marked as spam like that, one can only guess.


I don't need to guess.

Any subscriber that posts to the list from an email domain that employs 
contemporary security; i.e. SPF, and DKIM, and DMARC, all with strict 
settings, will likely cause this to happen for subscribers that have 
email with a provider that honors said strict security.
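For concreteness, all three technologies are published as DNS TXT 
records by the sending domain; a sketch of what strict settings look 
like (example.com, the selector, and the truncated key are placeholders, 
not policy advice):

```text
example.com.                      TXT  "v=spf1 mx -all"
sel1._domainkey.example.com.      TXT  "v=DKIM1; k=rsa; p=MIIBIjANBg..."
_dmarc.example.com.               TXT  "v=DMARC1; p=reject; rua=mailto:dmarc@example.com"
```

The `-all` and `p=reject` parts are the "strict" bits: they ask 
receivers to refuse mail that fails the checks.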



Thanks for the info.


You're welcome.

Note:  I expect this larger problem to get considerably worse (across 
mailing lists in general) before it gets better.  Some governments 
around the world are mandating that any business that partners with the 
government in any way must implement the contemporary technologies that 
I'm talking about.  Germany and the U.S.A. come to mind.  I don't know 
of other examples off hand.




--
Grant. . . .
unix || die



Re: [gentoo-user] Bounced messages

2019-11-01 Thread Grant Taylor

On 10/31/19 9:52 AM, Dale wrote:

Howdy,


Hi,


I been getting quite a few of these lately.


Some messages to you could not be delivered. If you're seeing this
message it means things are back to normal, and it's merely for your
information.

Here is the list of the bounced messages:
- 188380


I periodically get these too.

My understanding is that my receiving server is correctly rejecting 
specific messages in accordance with the purported sending domain's 
wishes (which they publish in DNS).


I believe this to be happening because the gentoo-user mailing list (and 
others) are sending messages in a way that violates contemporary spam 
filtering; e.g. SPF, DKIM, DMARC.
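The DKIM part of the breakage is easy to sketch.  This is not the real 
DKIM algorithm (which canonicalizes headers and body and signs with a 
key published in DNS); it just shows the core problem, assuming the list 
appends a footer to the body:

```shell
# The DKIM signature covers a hash of the message body; a list-appended
# footer changes that hash, so the original signature no longer verifies.
body='Hello from a subscriber.'
h1=$(printf '%s\n' "$body" | sha256sum | cut -d' ' -f1)
h2=$(printf '%s\n-- \nlist footer\n' "$body" | sha256sum | cut -d' ' -f1)
echo "$h1"
echo "$h2"   # differs from h1
```

The same applies to rewritten Subject lines, and SPF fails separately 
because the list's server, not the author's, does the sending.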


I don't think there is anything that we subscribers can do.  I think 
it's up to the mailing list administrators to update / reconfigure 
things to match contemporary spam filtering.




--
Grant. . . .
unix || die



Re: [gentoo-user] how did i get ~/26H1MJ8.txt?

2019-10-20 Thread Grant Taylor

On 10/20/19 3:57 AM, Caveman Al Toraboran wrote:

any idea what is this?  and how did i get it?


At a quick glance, it looks like a script to upgrade GCC across 
versions that are not quite compatible with each other.


Nothing in it concerns me.

Warning:  I am relying on my uncaffeinated memory, which is known to be 
forgetful at times.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-07 Thread Grant Taylor

On 8/6/19 6:31 PM, Rich Freeman wrote:
So, an initramfs doesn't actually have to do ANY of those things, 
though typically it does.


I agree that most systems don't /need/ an initramfs / initrd to do that 
for them.  IMHO /most/ systems should be able to do that for themselves.


Nothing prevents an initramfs from booting an alternate kernel 
actually, though if it does it will need its own root filesystem 
initialized one way or another (which could be an initramfs).


Agreed.

Though I think that's rare enough that I chose to ignore it for the last 
email.


Linux bootstrapping linux is basically the whole premise behind 
coreboot.


Sure.

But coreboot is not a boot loader or a typical OS.  coreboot falls into 
the "firmware" family.



Sure, but you don't need it ALL OVER YOUR FILESYSTEM.


I'd be willing to bet that 75% of what's contained in an initramfs / 
initrd is already on the root file system.  Especially if the initramfs 
/ initrd is tweaked to the local system.


Taking a quick look at the default initrd of an Ubuntu 16.04 LTS, just 
about everything I see in the following output exists on the system.


/boot/initrd.img-4.4.0-87-generic has 1908 items in it.

129 of them aren't already on the installed system.  Consisting of:

51 /scripts   Probably initrd specific
61 /bin   May be in /sbin /usr/bin /usr/sbin on the local system.
6  /conf  Probably initrd specific
5  /etc
2  /lib
4  misc

Digging further, 29 of the 61 for /bin are in /usr/bin.  12 are in 
/sbin.  That's 88 out of 1908 files in the 
/boot/initrd.img-4.4.0-87-generic file that aren't already all over your 
file system.


Some of the proposed solutions in this thread involved sticking stuff 
in /bin and so on.  An initramfs is nicely bundled up in a single file.


So, you're saving 88 files (out of 1908) and storing 1820 additional 
copies of files that exist in a container that's not easy to work with.


Why‽

Absolutely true, which means it won't interfere with the main system, 
as opposed to sticking it in /bin or somewhere where it might.


How do the files that are needed for the system to operate, being placed 
where they need to be, interfere with the main system?



Such as?


Allow me to rephrase / clarify slightly.

There have been many things that I've wanted to do in the past 20 years 
that the initramfs / initrd from the vendor did not support or was not 
flexible enough to do.


 · No support for root in LVM.
 · No support for root on LUKS.
 · No support for iSCSI disks.
 · No support for root on multi-path.
 · No support for the needed file system (tools).
 · No support for the needed RAID (tools).
 · No support for ZFS.
 · No support for root on NFS.

These are just the some of the things that I've wanted to do over the 
years that the initramfs / initrd as provided by the distro did not support.


I have routinely found initramfs / initrd limiting.


It is a linux userspace.  It can literally do anything.


Yes, /conceptually/ Linux (userspace) can do anything that Linux 
can do.  /Practically/ it can only do the things that the distro 
envisioned support for and included in their initramfs / initrd 
management system.



You don't need to use dracut (et al) to have an initramfs.


(See above.)

In fact, you could create your super root filesystem that does all 
the fancy stuff you claim you can't do with an initramfs,


Sure.  I did.  (Time and time again) it was the machine's root file 
system doing exactly what I wanted it to do.


then create a cpio archive of that root filesystem, and now you have 
an initramfs which does the job.


Why would I want to copy / permute something that's already working to 
add as an additional layer, which I don't need, complicating the overall 
process‽


Sure, but only if the kernel can find that disk without any userspace 
code.


There's a reason I said "if".  The extremely large majority of the 
systems that I've administered over the last 20 years could do that.



What if that disk is on the other side of an ssh tunnel?


That would be a case where you would actually /need/ an initramfs / initrd.

I'd like to hear tell of someone actually using a root disk that is 
accessed through an ssh tunnel.  Short of that, I'm going to consider it 
a hypothetical.


If you know the commands to do something, why would you have to wait 
for somebody else to implement them?


Because I have hundreds of machines that need to be supported by junior 
admins and sticking within the confines of what the distro vendor 
supports is a good idea.


Or more simply, sticking within distro vendor's support period.

Actually, for most of these the initramfs is the starting 
root filesystem (just about all linux server installs use one).


Remember, an initramfs / initrd is (or gets extracted to) a directory 
structure.


Just about all of the servers and workstations (mid five digits worth) 
that I've administered over the years end up with a SCSI (SATA / SAS) 
disk off of a controller with the driver in 

Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-06 Thread Grant Taylor

On 8/6/19 10:28 AM, Rich Freeman wrote:

An initramfs is just a userspace bootloader that runs on top of linux.


I disagree.

To me:

 · A boot loader is something that boots / executes a kernel with 
various parameters.
 · An initramfs / initrd (concept) is the micro installation that runs 
under the kernel that was booted (by the boot loader).


The initramfs / initrd micro installation does the following:

1)  fulfills prerequisites (e.g. loads requisite drivers, initializes 
networking for and discovers storage, decrypts block devices)

2)  mounts root (and some other) file systems
3)  launches additional init scripts.

None of those steps include booting / executing an alternate kernel.

Remember that the contents of an initramfs / initrd are a micro 
installation that simply includes the minimum number of things to do 
steps 1–3 above.


The initrd is literally an image of a block device with a root file 
system on it that includes the minimum number of things to do steps 1–3 
above.


The initramfs is a CPIO archive of the minimum number of things to do 
steps 1–3 above which get extracted to a temporary location (usually a 
ram disk).


Both an initrd and an initramfs are a micro installation for the purpose 
of initializing the main installation.  They are not a boot loader.
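In script form, the whole job of that micro installation fits in a few 
lines.  Here's a sketch of an /init along the lines of steps 1–3; the 
module and device names are examples, and since actually running it 
requires being PID 1 inside an initramfs, it is only syntax-checked here:

```shell
# Write the sketch /init to a scratch file; inside a real initramfs this
# file sits at the archive root and the kernel executes it as PID 1.
init=$(mktemp)
cat > "$init" <<'EOF'
#!/bin/sh
mount -t proc none /proc              # kernel interfaces used below
mount -t sysfs none /sys
modprobe sd_mod                       # 1) load requisite drivers (example)
mount /dev/sda1 /newroot              # 2) mount the real root file system
umount /sys /proc
exec switch_root /newroot /sbin/init  # 3) launch the main init
EOF
sh -n "$init"   # parses cleanly
```

Note that nothing in it boots a kernel; it only prepares and hands off 
to the main installation.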


Nobody has any problem with conventional bootloaders, and if you want 
to do anything with one of those you have to muck around in low-level 
C or assembly.


That's because boot loaders, e.g. GRUB, LILO, and loadlin, do something 
different and operate at a lower level than system init scripts.


There has been talk of gathering up all the stuff you need from 
/usr to bootstap everything, and then adding some scripting to mount 
everything.  The proposals have been to shove all that in / somewhere 
(perhaps even in /bin or /sbin).  If instead you just stick it all in 
a cpio archive you basically have an initramfs,


Not basically.  That /is/ what an initramfs / initrd contains.


and you don't need to have cruft all over your filesystem to do it.


Actually yes you do need that cruft.

Let's back up and talk about what that cruft is.

It's the minimum set of libraries and binaries required:

1)  Requisite libraries
 - C library
 - other similar / minimal / unavoidable libraries
2)  Requisite binaries
 - fsck
 - mount
 - networking utilities (for iSCSI / NFS / etc.)
 - other similar / minimal / unavoidable binaries
3)  Scripts to bring the rest of the system up
 - minimal init scripts to go from a post-kernel-execution (what 
was traditionally /sbin/init) to launching second stage init scripts


To me, all of these things are going to exist on the main installation 
/anyway/.  There is going to be minimal, if any, difference between the 
version put in the initramfs / initrd and what's in the main / (root).


So, to me, what you're calling "cruft" is core system files that are 
going to exist anyway.


If anything, having an initramfs / initrd means that you're going to 
have an additional copy of said core system files in a slightly 
different format (initramfs / initrd) that's not usable by the main system.


There are already configurable and modular solutions like dracut which 
do a really nice job of building one,


Yes.  We've had many different tools for simplifying management of the 
boot process, over the last 28 years of Linux and longer for Unix.


It comes down to loading the kernel from something and starting the 
kernel with proper parameters (that's the boot loader's job) and 
starting something (traditionally /sbin/init) from some storage somewhere.



and they make your bootstrapping process infinitely flexible.


Nope.  I don't agree.

There have been many things that I've wanted to do in the past 20 years 
that initramfs / initrd aren't flexible enough to do.


But adding an init script that calls tools on the root file system is 
infinitely more flexible.  I'm not restricted to doing things that 
dracut (et al.) know how to do.


If I can boot the kernel, point it at a root disk, and launch an init 
system, I can do anything I want with the system.


Let me say it this way.  If I can put together commands to do something, 
I can get the system to do it on boot.  I don't have to wait for dracut 
to be updated to support the next whiz-bang feature.


If you want root to be a zip file hosted on a webserver somewhere 
that isn't a problem.  If you want root to be a rar file on a 
gpg-encrypted attachment to a message stored on some IMAP server, 
you could do that too.  To make all that work you can just script it 
in bash using regular userspace tools like curl or fetchmail, without 
any need to go mucking around with lower-level code.  Then once your 
root filesystem is mounted all that bootstrapping code just deletes 
itself and the system operates as if it was never there (strictly 
speaking this isn't required but this is usually how it is done).


You need /something/ to be 

Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-06 Thread Grant Taylor

On 8/6/19 9:54 AM, Canek Peláez Valdés wrote:
If it's computable it can be done, of course. Therefore it can be done, 
currently. I don't think anybody has said it absolutely cannot be done.


>.<

So it sounds like it's a question of /how/ compatible / possible it is.

It seems as if there is enough incompatibility / problems that multiple 
people are comfortable saying that it can't be done on some level.



The thing is:

1. How much work it implies to get it done.
2. Who is gonna do said work.

The answer to 1 is "a lot", since (as someone mentioned in the thread) 
it involves changing not only the init (nevermind systemd; *ALL* init 
systems), but all applications that may require to use binaries in /usr 
before it's mounted.


The answer to 2 is, effectively, "nobody", since it requires a big 
coordinated effort, stepping on the toes of several projects, 
significantly augmenting their code complexity for a corner case[1] that 
can trivially be solved with an initramfs, which just works.


I don't currently feel like I can give a proper response to this.

1)  I don't have the time to lab this and try things.
2)  I don't want to further hijack this thread and start discussing what 
precisely is and is not broken.
3)  I question — from a place of ignorance — just how much effort there 
is for #1.


Arguing against this trivial (and IMHO, elegant) solution is tilting at 
windmills. Especially if it is for ideological reasons instead of 
technical ones.


Please clarify what "this trivial solution" is.  Are you referring to 
initramfs / initrd or the 'split-usr' USE flag?




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: HACK: Boot without an initramfs / initrd while maintaining a separate /usr file system.

2019-08-05 Thread Grant Taylor

On 8/5/19 8:45 PM, Grant Taylor wrote:

Even bigger hack.


I wouldn't be me if I didn't lob these two words out there:

mount namespaces

/me will see himself out now.
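Since I lobbed the words out there, a sketch of the idea: each process 
tree gets its own private view of a mount point.  It needs root, so it 
is only syntax-checked here, and /opt/alt-usr and some-tool are made-up 
example names:

```shell
# Write the sketch to a scratch file; running it for real requires root
# (or an unprivileged user namespace on kernels that allow it).
ns=$(mktemp)
cat > "$ns" <<'EOF'
#!/bin/sh
# Only the command run inside the new mount namespace sees this /usr.
unshare --mount sh -c '
  mount --bind /opt/alt-usr /usr
  exec "$@"
' -- /usr/bin/some-tool
EOF
sh -n "$ns"
```

That gets close to per-process "fancy links" without any new link type.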



--
Grant. . . .
unix || die



Re: [gentoo-user] Re: HACK: Boot without an initramfs / initrd while maintaining a separate /usr file system.

2019-08-05 Thread Grant Taylor

On 8/5/19 6:28 PM, Jack wrote:
However, I keep wondering if an overlay file system might not be of 
some use here.  Start with /bin, containing only what's necessary to 
boot before /usr is available.


I wonder how much of what would need to be in the pre-/usr /bin 
directory can be provided by busybox.  (Assuming that busybox is 
compiled with everything living in / (root).)


Once /usr is mounted, overlay mount /usr/bin on /bin (or would it be 
the other way around?)


An overlay mount (mount -o bind /usr/bin /bin) would be an additional 
mount.  Which in and of itself is not a bad thing.  But the sym-link 
from /bin -> /usr/bin would avoid the additional mount.  Admittedly, you 
might need one additional (bind) mount somewhere to be able to access 
the underlay while /usr is mounted.


Unless

...

Even bigger hack.

What if the underlay (/ (root)) file system had the following structure:

/bin -> /usr/bin
/usr/bin -> /.bin

That would mean that the pre-/usr /bin contents would still be 
accessible via /.bin even after /usr is mounted.  And /bin would still 
point to /usr/bin as currently being discussed with /usr merge.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-05 Thread Grant Taylor

On 8/5/19 5:34 PM, Mick wrote:
I am not entertaining ad hominem attacks on whoever may have been 
involved in such decisions.  Only the impacts of such decisions on 
gentoo in particular.


:-)

I probably used an incorrect figure of speech and caused confusion. 
We're only discussing the merge of /bin and /sbin to /usr/bin and 
/usr/sbin (it seems to be more nuanced than this though - see gentoo 
forums thread further down).


I started to read to be able to be informed when drafting my reply. 
Emphasis on "started".  The first comment to the quote makes me think 
that it's going to be a lively discussion.  I'll read it later as time 
permits and respond accordingly.


However, the takeover of Linux in general by systemd architectural 
changes is a fact.  It is also a fact gentoo has been changed a lot 
to accommodate systemd.  I have set USE="-systemd" but still end 
up with service unit files on my system, unless I take additional 
steps to remove/mask them.  At some point udev/dbus/what-ever will be 
irrevocably linked to systemd, just because its devs do not care for 
alternative architectures.  Some packages require systemd components, 
like (e)logind, and so on.  Some years ago eudev was forked from 
systemd with a stated aim at the time to also restore the borked 
separate /usr without an initrd - did this ever happen?  There is 
a direction of travel and it has been influenced heavily by systemd 
design decisions.


ACK

I think I could draw analogies with XFree86 vs Xorg vs Wayland.  In the 
beginning, nobody wants to actively stop development of the old method. 
But in the end, nobody wants to devote any effort to keep bringing the old 
method forward.  Thus the old method gets left behind.


I'm not saying it's correct, or not, just that it is the nature of things.

There isn't any - I haven't seen it either.  Sorry if I confused 
the point.


Actually, the quote in the first forum post you linked to has the following:

/sbin->usr/bin
/usr/sbin->bin

That takes four directories (/bin, /sbin, /usr/bin, /usr/sbin) and 
combines them into two (/sbin & /usr/bin and /bin & /usr/sbin) but it 
also crosses bin & sbin as well as going opposite directions with the 
links.  —  Yuck!!!


Yes, same here, but this is primarily because since gentoo's change in 
this area I moved away from using a separate /usr fs to avoid having 
to use an initrd.


ACK

I have given one benefit of keeping a separate fs for /usr, mounting 
it read only for daily use.  Or you could have a shared NFS partition 
across various client PCs, facilitating system duplication.  You could 
also have /usr on a faster disk for performance reasons.


ACK

A benefit of /var being separate (or wherever www/, logs/, mail/ 
and databases are stored) is so that if it runs out of space the 
remaining system is not brought to its knees.


ACK


Ditto for /home, with the addition of encrypting user's data/partition 
and mounting it with nosetuid (to prevent the users from running 
their own secret root shell).


ACK

So at least two reasons, helping to manage (simply) access rights and 
space are valid enough reasons IMHO to not remove a separate /usr 
option from gentoo, but I'm interested to hear what others think.


Agreed.

One might argue you still retain the option of using a separate /usr, 
but with the new added restriction of being obliged to engage in 
boot time gymnastics with an initrd, LVM, your mount-bind solution 
and whatever else.


I don't have any current first hand experience with /usr being a 
separate file system without using an initramfs / initrd.  So I'm going 
to have to take what you, and others, say on faith that it can't 
/currently/ be done.  But I've got to say that I find that idea 
disturbing and highly suspicious.


I'd be curious for pointers to bugs about this if you have them handy. 
Please don't look, I'll dig later if I'm sufficiently motivated.


However, workarounds which add complication (remove simplicity and 
flexibility) to fix something after breaking it, is what all this 
argument is about.  Such changes remove choice.


Ya.  It's sort of like painting yourself into a corner because one (or 
more) too many bad decisions were made in the past.  I'd much rather 
admit that a bad decision was made, undo it, and move forward again with 
new knowledge.  Sadly, IMHO not enough people do that.



I'll try not to mess up the thread.  :-)


:-D

I'll try as well.  But I'm betting that we're both human.

Please do, the systemd merge refers to merging /bin to /usr/bin and 
/sbin to /usr/sbin.


https://fedoraproject.org/wiki/Features/UsrMove

However, this gentoo thread mentions further merging, which made my 
head spin:


https://forums.gentoo.org/viewtopic-p-7902764.html


Ya.  I need to read that thread in detail.

The following bit concerns me.  I do hope that it's a typo.

/sbin->usr/bin
/usr/sbin->bin

You're probably correct.  In any case, the initial move of 
subdirectories of the / fs to different 

Re: [gentoo-user] Re: HACK: Boot without an initramfs / initrd while maintaining a separate /usr file system.

2019-08-05 Thread Grant Taylor

On 8/5/19 5:52 PM, Ian Zimmerman wrote:
Don't you have to go through some extra hoops (a flag to the mount 
command or something) to mount over a non-empty directory?


Nope.

I don't recall ever needing to do anything like that in Linux.

I do know that other traditional Unixes are more picky about it.  AIX 
will refuse to use a populated directory as a mount point.


As I type this, perhaps ZFS on Linux complains, but I don't recall.



--
Grant. . . .
unix || die



Re: [gentoo-user] HACK: Boot without an initramfs / initrd while maintaining a separate /usr file system.

2019-08-05 Thread Grant Taylor

On 8/5/19 5:45 AM, Mick wrote:

Interesting concept, thanks for sharing.


You're welcome.

Unless I misunderstand how this will work, it will create duplication 
of the fs for /bin and /sbin, which will both use extra space and 
require managing.


Yes, it will create some duplication.  Though I don't think that /all/ 
of the contents of /bin and /sbin would need to be duplicated.  Think 
about the minimum viable binaries that are needed.


Perhaps something like busybox would even suffice.


Will you mount -bind the underlying fs in fstab?


You could.  I have done so in some VMs that I've tested various things.

My use case on my VPS is for an encrypted data partition.  So I have 
things like the following:


/home -> /var/LUKS/home
/etc/mail -> /var/LUKS/etc/mail
/etc/bind -> /var/LUKS/etc/bind

/var/LUKS/home/gtaylor does have an absolute minimum directory structure 
so that I can ssh in with my key and run a script to unlock / open and 
mount the LUKS volume and start some services (mostly email and DNS 
related).
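The unlock script itself is along these lines; the device, mapping, and 
service names are examples of my setup rather than exact details, and 
since running it requires root and cryptsetup, it is only syntax-checked 
here:

```shell
# Write the sketch unlock script to a scratch file.
unlock=$(mktemp)
cat > "$unlock" <<'EOF'
#!/bin/sh
cryptsetup luksOpen /dev/vda2 data   # prompts for the LUKS passphrase
mount /dev/mapper/data /var/LUKS
rc-service named start               # bring up the services living there
rc-service postfix start
EOF
sh -n "$unlock"   # parses cleanly
```

Everything outside /var/LUKS stays bootable without me, so the VPS can 
reboot unattended and wait for the unlock.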


How will you make sure installations of the same binaries are 
installed/copied in both underlying and mounted /usr/* fs and kept 
in sync?  By changing all affected ebuilds?


I don't have an answer to this question.  I've not needed an answer to 
this question.


I think I would likely create a script that would copy specific files 
from the /usr path to the underlying /(usr) path as needed.


I doubt there would be many files.

I don't see any need to alter an untold number of ebuilds for a system 
architecture / file system decision.


It is a hack alright, to restore the previous default /usr 
functionality, so a useful option to consider.


That's why I shared it.

It's also an example of an idea that works for my use case that you are 
free to take and modify for your use case.  I don't need to know about 
your use case, much less have an answer for it, when I'm sharing my use 
case.  (Harking back to the different types of communities in the 
previous email.)


If I were to be asked my preference would be to revert the systemd 
inspired changes which caused this loss of functionality.  ;-)


Fair enough.

Though I would question just how much and what is broken by having a 
separate /usr file system without systemd.  }:-)  Specifically, is it 
truly broken?  Or does it need some minor tweaks?




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-05 Thread Grant Taylor

On 8/5/19 4:49 AM, Mick wrote:

It is being /assertively/ promoted persistently by the same devs.


Okay.

Just because it's the same developers promoting both does not mean that 
any logic / evidence they might provide in support of /usr merge is 
inherently wrong.  We should judge the merits of their logic / evidence 
on its own, independent of what we think of their other work.  (Read: 
Don't turn this technical conversation into a character discussion.)



Sure, but for how much longer?


/me looks around for something that he must have missed.

I didn't see anything about combining bin and sbin.

Let's focus on the /usr merger discussion /first/.  Then we can address 
bin vs sbin.


You need to check the direction of travel here and how long before a 
particular dev priesthood which imposes initrd, systemd and a single 
partition image, or whatever suits their mass market use cases agenda, 
foists their choice upon *all* users.


Again, focus on technical, this is not a character discussion.

I also don't think they will be successful in foisting their ideas on 
*all* users.  I'm quite confident that there will always be some random 
distro that does not subscribe to them.  Maybe the usable percentage 
will be quite small.  But any distro that doesn't toe the line means 
that not /all/ distros follow the agenda.  (Yes, I'm saying distro 
instead of user, but I think you understand either way.)


I personally don't like the idea of /{,s}bin being a sym-link to 
/usr/\1.  I like the idea that / (root) and /usr are (or can be) 
separate file systems.  That being said, I've not actually /needed/ them 
to be separate file systems in quite a while.  I may have /wanted/ them 
to be separate so that I could utilize different file system types for / 
(root) and /usr.


Looking back at all the systems that I've administered for the last ~20 
years, almost all of them did use a single file system for / (root) and 
/usr.  Sure, they likely had other file systems as well; /var, /tmp, and 
/home most likely.


Given how Unix / Linux is changing—evolving—where and how it's being 
used, I can see how there is no longer any need for the separate file 
systems.  I think this lack of need combined with the additional 
complexity is evolving into a desire for a single file system for / 
(root) and /usr.  Since I can't point to any specific use case—in about 
90 seconds—that a single file system would break, I'm pausing and thinking.


So, since we're discussing it, I invite you, and anyone else, to list 
pros and / or cons for either methodology.  I'm quite interested in 
learning what others think.


I will warn you that I'll likely respond with counter points and / or 
workarounds.  I will also expect that you will respond in kind to my 
response.


10 point
20 counterpoint
30 counter counterpoint
40 goto 10


I think following the lib directories merge, the discussion is now about
merging:

  /bin -> /usr/bin
  /sbin -> /usr/bin
  /usr/sbin -> /usr/bin


Let's fully qualify those directories.  ;-)

I've not seen /any/ discussion about merging bin and sbin.  Perhaps I've 
just missed it.  I'm going to ignore the merging of bin and sbin for the 
time being.



Since you asked this is my understanding, which may need correction by more
learned contributors, because some of this has happened well before I sat in
front of a keyboard.  Back in late 60s, early 70s, disks became larger as UNIX
was getting bigger.


ACK

This initially led to /bin and /sbin split across different physical 
devices and soon the same happened for /home, et al.


That's different than the history that I've heard.

Originally, everything was on one single disk.

Then a second disk was added and /usr was created.  User home 
directories were moved to /usr (hence its name), and many of the same 
directory structures were replicated from / (root) to /usr.  (Likewise 
with /usr/local on other Unixes / Linuxes later.)


Then a third disk was added and /home was created.  User home 
directories were moved to /home.


So the /bin vs /sbin split that you're referring to doesn't jibe with 
the history that I've seen / read / heard time and time again.


My understanding is that /sbin was originally intended for binaries 
needed to bring the system up.  (I view the "s" in "sbin" as meaning 
"system (binaries)".)


Similarly, my understanding is that /bin was originally intended for 
general use binaries that were used by more people.


Note:  The same understanding applies to the directories wherever they 
are located, / (root), /usr, and /usr/local.


But, I am ~> we are, ignoring bin vs sbin for much of this message and 
focusing on /usr merge.  ;-)



This historical fact of UNIX evolution to use multiple and subsequently larger
storage devices is being conflated with the purpose of these directories, what
they were created for back then and what their use should be today.


That sounds good.  But please put some details behind it.  What do you 

[gentoo-user] HACK: Boot without an initramfs / initrd while maintaining a separate /usr file system.

2019-08-04 Thread Grant Taylor

On 8/4/19 7:26 PM, Grant Taylor wrote:
I am also using a bit of a hack that I think could be (re)used to allow 
/usr being a separate file system without /requiring/ an initramfs / 
initrd.  (I'll reply in another email with details to avoid polluting 
this thread.)


I think that a variation of a technique I'm using for LUKS encrypted 
/home on a VPS could be used to allow booting without an initramfs / 
initrd while maintaining a separate /usr file system.


The problem is that /bin & /sbin would be symbolic links to /usr/bin & 
/usr/sbin.  So, any commands that would be needed to mount the /usr file 
system would need to be directly accessible in /bin & /sbin paths, or 
indirectly accessible in /usr/bin & /usr/sbin.


IMHO this is a whopper of a hack.

Create the bin and sbin directories inside of the /usr directory that is 
the mount point so that they are on the underlying file system that /usr 
is mounted over top of.  Then copy the needed binaries to the /usr/bin & 
/usr/sbin directories on the underlying file system.  That way, 
/sbin/fsck -> /usr/sbin/fsck still exists even before the real /usr is 
mounted.


I did say this is a whopper of a hack.

It's trivial to access these directories even when the normal / full 
/usr is mounted.


1)  mkdir /mnt/root-underlay
2)  mount -o bind / /mnt/root-underlay
3)  ls /mnt/root-underlay/bin /mnt/root-underlay/sbin

This "technique" / "trick" / "hack" works because the /bin & /sbin 
""directories are sym-links to the /usr/bin & /usr/sbin directories. 
There is nothing that means that the contents of (/usr)/(s)bin can't 
change from one command invocation to another.  The /(s)bin sym-links 
just need to point to a valid directory.  They can easily be on the 
root-underlay file system that /usr gets mounted on top of.
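A concrete sketch of preparing that underlay, using the same bind-mount 
trick.  (The busybox-based tool set below is my illustration; the real 
list depends on what your fstab needs in order to fsck and mount /usr.)

```shell
# Expose the underlying root file system (the "underlay")
# even while the real /usr is mounted on top of it.
mkdir -p /mnt/root-underlay
mount -o bind / /mnt/root-underlay

# Create (s)bin inside the underlay's /usr mount point, so the
# /bin and /sbin sym-links still resolve before /usr is mounted.
mkdir -p /mnt/root-underlay/usr/bin /mnt/root-underlay/usr/sbin

# Copy the minimal tooling needed to fsck and mount /usr.
# A statically linked busybox keeps the copy self-contained;
# otherwise each binary's shared libraries must be copied too.
cp -a /bin/busybox /mnt/root-underlay/usr/bin/
ln -sf busybox /mnt/root-underlay/usr/bin/mount
ln -sf ../bin/busybox /mnt/root-underlay/usr/sbin/fsck

umount /mnt/root-underlay
```

After this, early boot can run /sbin/fsck and /bin/mount against the 
underlay copies; once the real /usr is mounted on top, the full 
binaries shadow them again.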




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: USE flag 'split-usr' is now global

2019-08-04 Thread Grant Taylor

On 8/4/19 12:03 PM, Mick wrote:
I don't know more about this, but it seems we are being dragged towards 
a systemd inspired future, whether the majority of the gentoo community 
of users want it or not.


How is the /usr merger /directly/ related to systemd?

In my view system binaries should not be thrown in the same pot as user 
binaries and keeping the two separate makes good sense for those of us 
who do not spin up 200 cloned VMs a second on a RHL corporate farm.


What are you using to differentiate system binaries and user binaries? 
Are you using the /usr directory?  Or the bin vs sbin directories?


Please elaborate on your working understanding.  I ask because I want 
to correctly understand you and speak to what you're talking about. 
Especially considering that there will still be the bin vs sbin directories.


I'm not arguing against systemd, or merging all directories under an 
equivalent of a $WINDOWS/ path, but it seems to me a gentoo system 
architecture should retain the freedom of choice and flexibility it 
has been famous for.


I agree that the user choice is *EXTREMELY* *IMPORTANT*!

Retrograde steps like being forced to use an initramfs just for 
retaining a separate /usr partition, should not be the way gentoo 
evolves.


Agreed.

I am curious why /you/ want (the ability to have) a separate /usr file 
system.  I know that I want to retain the ability.  That being said, 
I've not needed it in quite a while.


I am also using a bit of a hack that I think could be (re)used to allow 
/usr being a separate file system without /requiring/ an initramfs / 
initrd.  (I'll reply in another email with details to avoid polluting 
this thread.)


Setting up a USE flag to accommodate such changes would be more 
agreeable for many gentoo users, rather than changing the default 
set up.


Please forgive my ignorance.  What was the default before 'split-usr' 
was made global?  I assume that 'split-usr' wasn't a default.  So, by 
my limited understanding, 1) it was / still is a USE flag and 2) the 
more historically compatible behavior has been chosen as the new default.


NOTE: Please do not start a flamewar, I'm just expressing my opinion 
as a long term gentoo user who prefers to use gentoo for personal 
computing, instead of other binary systemd based distros.


I'm not taking this as a flame.  I'm taking it as an honest and open 
discussion to learn what others are doing / thinking.


For the record, I'm largely okay with /bin being a sym-link to /usr/bin. 
However, I do want /sbin to remain local to the root file system.  I've 
supported multiple installs where /usr was a separate file system and 
needed the minimal system (not an initramfs nor an initrd) to fix things 
at times.  I'm also quite happy without an initramfs / initrd.




--
Grant. . . .
unix || die



Re: [gentoo-user] Re: How do I get rid of colors (console and xterm)?

2019-07-08 Thread Grant Taylor

On 7/8/19 2:18 AM, Christian Groessler wrote:
Ideally for everything inside an xterm or console screen. I'm going to 
try "-cm" for xterm. Thanks David (in a previous post) for the suggestion.


If the -cm command line option does what you want, you can easily add 
the following to the ~/.Xdefaults file, reread it, and be good to go.


xterm.vt100*colorMode: false

Reread the file via "xrdb ~/.Xdefaults" or logout & back in.



--
Grant. . . .
unix || die



Re: [gentoo-user] Human configurable boot loader, OR useful grub2 documentation

2019-07-05 Thread Grant Taylor

On 7/5/19 1:57 PM, Neil Bothwick wrote:
In the case of GRUB2 that is unlikely to be the case, as it is meant 
to do everything. That's why the auto-generated config files are so 
long and full of conditionals. On a system you have full control over, 
you can remove all the conditionals.


I had grub-mkconfig puke on me recently.  I've not spent time diagnosing 
why.


/boot was a local disk (/dev/sda1) per Gentoo install documents.

/ (root) was special in that it was /dev/nfs.

Grub (grub-mkconfig) tossed its salad, saying:

/usr/sbin/grub-probe: error: failed to get canonical path of 
'192.0.2.1:/export/hostname/root'


With a return code of 1.

GRUB2 is incredibly bendy, if only the documentation were as compliant 
to the wishes of its users,


Sometimes I wonder just how bendy it really is.



--
Grant. . . .
unix || die



Re: [gentoo-user] Fdisk reports new HD as 10MiB

2019-07-05 Thread Grant Taylor

On 7/5/19 8:40 AM, Vladimir Romanov wrote:
Maybe your motherboard or BIOS is just too old, and it doesn't support 
such hard disks?


I remember a time when Linux would support large (multi-GB) drives when 
the BIOS would not support them.


Linux could bypass the BIOS and talk directly to the drives and utilize 
the drive's full capacity.


The idea that Linux can no longer do this with larger drives disheartens me.



--
Grant. . . .
unix || die



Re: [gentoo-user] Human configurable boot loader, OR useful grub2 documentation

2019-07-05 Thread Grant Taylor

On 7/5/19 8:04 AM, Rich Freeman wrote:
it probably is worth taking the time to see if you can bend to the tool 
rather than making the tool bend to you.


At face value, this is antithetical to how computers should work.

Computers should do our bidding, NOT the other way around.

That being said, sometimes it's the case that if you're having to bend 
to the tool too much, you might be trying to do something the tool is 
not meant to do, and as such you should re-think what you're trying to do.


After that brief sanity check, by all means, bend the tool to your 
liking.  }:-)




--
Grant. . . .
unix || die



Re: [gentoo-user] How do I get rid of colors (console and xterm)?

2019-07-04 Thread Grant Taylor

On 7/4/19 1:10 PM, Christian Groessler wrote:

Hi,


Hi,

I'm new here. My question is how do I get rid of colors in "emerge", 
"man" and other command line programs. I managed to do in the shell 
(bash), but I'm somehow lost how to change it elsewhere. In "vi" I know 
of "syn off".


Try changing your terminal type (TERM environment variable) to something 
that doesn't support color.  I'd suggest VT100.


Try the following in a terminal session and see if that helps.

   export TERM=vt100

See attached a pic of an xterm window, which I cannot read easily. (I'm 
color-blind, so this might enhance the problem.)


You can also redefine the colors in XTerm so that anything that thinks 
it's using a given color number is actually using whatever RGB value you 
set.  Thus you can alter the colors to whatever you want.
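For example (the RGB values here are just an illustration), a few 
~/.Xdefaults entries like these remap two of the ANSI color slots and 
force the default foreground / background; reload them with 
"xrdb ~/.Xdefaults":

```
! Remap ANSI color 1 (red) and 4 (blue) to higher-contrast values.
xterm*color1: #d75f5f
xterm*color4: #5f87d7
! The default foreground / background can be overridden too.
xterm*foreground: black
xterm*background: white
```

Programs still think they are printing "red", but the pixels on screen 
are whatever you chose.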




--
Grant. . . .
unix || die



Re: [gentoo-user] why does Udisks require Lvm2 ?

2019-06-24 Thread Grant Taylor

On 6/24/19 12:12 PM, Mick wrote:

LVM-RAID uses the kernel's mdraid,


Yep.

You can use device-mapper commands to show the internal / 
under-the-hood MD devices.


I feel like what LVM does to mirror (RAID 1) devices is complex.  You 
end up with non-obvious LVs that are then raided together to create 
another virtual block device that is what you see as the LV.


There are options about where and how metadata is mirrored.  Some of 
which is stored in the VG and others is stored in another small hidden 
LV specifically for this purpose.  Which itself can be configured to 
have multiple copies.


At least that's how I remember things from about five years ago.


but with less tools to manage the RAID configuration than mdadm offers:


That's because the standard LVM tools make calls to the kernel to manage 
things.  I never /needed/ anything other than the LVM tools to 
administer LVM RAID.  But I think that you can find the expected things 
under device mapper et al. if you know where to go look.


When I say "hidden", it seems as if the traditional LVM tools simply 
don't expose LVs with specific naming patterns unless you go looking for 
them.  Much like dot files are ""hidden by default, but there if you 
know where and how to look.
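A sketch of going looking for them (the volume group name vg0 is 
hypothetical):

```shell
# Plain 'lvs' hides the internal sub-LVs of a RAID LV.
lvs vg0

# 'lvs -a' also lists the hidden pieces, conventionally shown in
# brackets: [lv_rimage_0], [lv_rimage_1], [lv_rmeta_0], and so on.
lvs -a -o lv_name,segtype,devices vg0

# The raw device-mapper view of the same stack.
dmsetup ls --tree
```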




--
Grant. . . .
unix || die



Re: [gentoo-user] why does Udisks require Lvm2 ?

2019-06-24 Thread Grant Taylor

On 6/24/19 11:47 AM, Neil Bothwick wrote:
Of course it is, a RAID1 device is just a block device on which you can 
put any filesystem you like. RAID and LVM are complementary technologies 
that work well together, but neither needs the other (apart from the 
device-mapper bit).


Eh.  LVM can require RAID (multiple devices) without actually using MD 
outside of LVM.


LVM can do RAID inside of LVM (I think this is fairly atypical). 
But it does mean that you can turn physical disks (or better partitions) 
into Physical Volumes for LVM and then create different Logical Volumes 
with different RAID properties.


I once had a LV w/ RAID 0 striping across multiple PVs with another LV 
with RAID 5 for redundancy, in the same PVs.


This LVM functionality does require RAID (multiple device) support as 
that's what's used /inside/ (read: under the hood) of LVM.
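A sketch of that kind of per-LV setup (device names and sizes are 
illustrative):

```shell
# Turn three partitions into PVs in one VG.
pvcreate /dev/sda2 /dev/sdb2 /dev/sdc2
vgcreate vg0 /dev/sda2 /dev/sdb2 /dev/sdc2

# One LV striped across all three PVs (RAID 0) for speed...
lvcreate --type striped -i 3 -L 20G -n scratch vg0

# ...and another LV on the same PVs with RAID 5 redundancy
# (two data stripes plus parity spread across the three PVs).
lvcreate --type raid5 -i 2 -L 20G -n data vg0
```

Each LV gets its own RAID properties, even though both live on the 
same set of physical volumes.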




--
Grant. . . .
unix || die



Re: [gentoo-user] why does Udisks require Lvm2 ?

2019-06-24 Thread Grant Taylor

On 6/24/19 2:40 AM, Peter Humphrey wrote:
Yes, I've done the same on two boxes that have no need of lvm. It does 
seem wasteful though.


Probably.

I dislike the fact that other things that need device mapper have to 
drag LVM along, or apply (what I call) a device-mapper-only /hack/.


I feel like device-mapper should be its own package that other things 
depend on: LVM, RAID (mdadm, et al.), multi-path, LUKS (cryptsetup).


I forget the detail now, but a recent-ish version of sys-fs/cryptsetup 
found it needed a hard dependency on some of the code in lvm2.


Did you apply (what I call) the device-mapper-only /hack/?  Or was LVM 
pulled in for device-mapper?


It seems to me that we have here an opportunity for redesign of certain 
packages. ("We" the community, that is.)


Agreed.


On this box, which does need lvm for RAID-1 on two SSDs:


Do you /need/ LVM?  Or is it extra that comes with device-mapper?



--
Grant. . . .
unix || die



Re: [gentoo-user] why does Udisks require Lvm2 ?

2019-06-22 Thread Grant Taylor

On 6/22/19 3:55 PM, Neil Bothwick wrote:

The indentation shows that it is a hard dependency of cryptsetup, which
is backed up by reading the ebuild. I expect that it needs the
device-mapper functionality provided by lvm, in which case you can set the
device-mapper-only USE flag to avoid installing the full lvm suite.


Why isn't device-mapper its own package‽  One which LVM depends on.

Multi-Path (as in dm-multipath) can easily be used without LVM.
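Until it is, the quoted workaround amounts to a one-line package.use 
entry (flag name as it existed at the time of writing):

```
# /etc/portage/package.use
# Build sys-fs/lvm2 for libdevmapper only, without the full LVM suite.
sys-fs/lvm2 device-mapper-only
```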



Re: [gentoo-user] Re: line wrap over in xterm/konsole

2019-06-22 Thread Grant Taylor

On 6/22/19 2:13 AM, Mick wrote:

These USE flags are the same as mine.


ACK


I don't think it is a shell related problem (but may be wrong).


I think we need to be very careful and specific what part we think is 
shell (thus possibly readline) related vs terminal emulator related vs 
something else.  (See my recent reply discussing an old school TTY 
terminal.)


After all changing the shell option in .bashrc does not affect the 
display within the xterm window.


"shell option in .bashrc"???

Are you launching a different shell from Bash?  (.bashrc is inherently 
Bash.)


Or are you using Bash as your interactive shell and using a different 
shell for sub-commands / forks / etc.?


This is the problem I was describing as 'annoying'. Xterm draws the 
output once to fill in the real estate of the current xterm window, 
but changing the window width does not redraw each line to reflow it 
across the new window width.


Agreed.  This is the behavior I've seen (and expected) from XTerm for 20 
years.



Apologies for my confusing description - I'll have another go below.


;-)

Confusion is okay as long as we work to clarify things.  That's part of 
communicating effectively.


I ran ldd and as is logical I can see there are some differences in 
the libs used by both programs.  Neither of them use libterm.


I was fairly certain that XTerm did not use libterm.  I didn't know 
about (u)rxvt.  I think some other—possibly more common—terminal 
emulators do use it.


Aside:  I consider urxvt to be a Unicode variant of rxvt.  Much like 
uxterm is a Unicode variant (mode) of xterm.  To me, both of these pairs 
are largely interchangeable for the conversation that we're having. 
Please correct me if you think I'm wrong on this point.


In my systems urxvt will wrap lines when shrinking the width of 
the window AND unwrap them when increasing the width of the window. 
This is happening in real time as the window expands/contracts.


This is the behavior that I'm seeing with urxvt as well.

I have never seen XTerm exhibit this behavior.  (At least not without 
something else inside of XTerm that does it, e.g. screen or tmux.)


Again in my systems xterm will truncate lines when shrinking the width 
of the window.  This truncated output is now lost.  Increasing the 
width of the window will not restore the truncated lines.  Scrolling up 
will now draw lines in the new full width of the xterm window, but 
the truncated lines remain truncated and their information is lost.


Agreed.  This is what I've seen and come to expect from XTerm after 
using it for 20 years.



