On 8/21/20 5:58 PM, Rich Freeman wrote:
It is what just about every other modern application in existence uses.

VoIP does not.

No RDBMSs that I'm aware of use it as their primary protocol. (Some may be able to use HTTP(S) as an alternative.)

Outlook to Exchange does (did?) not use it. It does have a sub-optimal RPC-over-HTTP(S) option for things like mobile clients. But you still get much better service using native non-HTTP(S) protocols.

I'm not aware of any self hosted enterprise grade remote desktop solution that uses HTTP(S) as its native transport.

Just because it's possible to force something to use HTTP(S) does not mean that it's a good idea to do so.

I'm sure there are hundreds of articles on the reasons for this that are far better than anything I'd come up with here.

Probably.

And they don't work for anything OTHER than SMTP.

There are /other/ libraries that work for /other/ things.

Having a general thing that can be abused for almost all things is seldom, if ever, the optimal way to achieve the goal.

A library for JSON/webservices/whatever works for half the applications being written today.

I choose to believe that even that 50% is significantly sub-optimal and that they have been pressed into that role for questionable reasons.

This is simple.  This is how just about EVERYBODY does it these days.

I disagree.

Yes, a lot of people do. But I think it's still a far cry from "just about everybody".

http works as a transport mechanism.

Simply working is not always enough. Dial up modems "worked" in that data was transferred between two endpoints. Yet we aren't using them today.

Frequently, we want the optimal solution, or at least the best solution that we can get.

That is the beauty of standards like this - somebody else figured out SSL on top of HTTP so we don't need an email-specific reimplementation of that.

I think that you are closer than you realize or may be comfortable with.

"somebody else figured out" meaning that "someone else has already done the work or the hard part". Meaning that it's possible to ride people's coat tails.

HTTP(S) as a protocol has some very specific semantics that make it far from ideal for many things. Things like the server initiating traffic to clients. Some, if not many, of these semantics impose artificial limitations on services.

I mean, why use TCP?

For starters, TCP ensures that your data arrives at the other end (or notifies you if it doesn't), that it's in order, and that it's not duplicated.

There are multiple other protocols that you can use. UDP is a prominent one.
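As a rough illustration, here is a minimal Python sketch of a TCP exchange on localhost. Nothing in the application code handles retransmission, ordering, or de-duplication; the TCP stack provides all of that, which is exactly what you give up when you drop down to UDP.

    import socket
    import threading

    def receiver(listener):
        conn, _ = listener.accept()
        with conn:
            data = b""
            while chunk := conn.recv(4096):
                data += chunk
        # The bytes arrive exactly once, and in the order they were sent.
        print("received:", data.decode())

    listener = socket.create_server(("127.0.0.1", 0))   # pick any free port
    port = listener.getsockname()[1]
    t = threading.Thread(target=receiver, args=(listener,))
    t.start()

    with socket.create_connection(("127.0.0.1", port)) as client:
        for part in (b"one ", b"two ", b"three"):
            client.sendall(part)                         # no ACK handling here
    t.join()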

Why not use something more tailored to email?

Now you're comparing an application layer protocol (email / SMTP) to a network transport protocol (TCP / UDP / etc.).

What transport layer protocol would you suggest using?

Be careful to trace higher layer abstraction protocols, e.g. QUIC, back down to the transport layer protocol (UDP in QUIC's case).

TCP probably has a dozen optional features that nobody uses with email, so why implement all that just to send email?

What contemporary operating system does not have a TCP/IP stack already?

How are applications on said operating system speaking traditional / non-QUIC based HTTP(S) /without/ TCP?

Even exclusively relying on QUIC still uses UDP.

The answer is that it makes WAY more sense to recycle a standard protocol than to invent one.

You're still inventing a protocol. It's just at a higher layer. You still have to have some way / method / rules / dialect, more commonly known as a protocol, on whatever communications transport you use.

Even your web based application needs to know how to communicate different things about what it's doing. Is it specifying the sender, the recipient, the subject, or something else? Protocols are what define that. They are going to exist. It's just a question of where they exist.

You're still inventing a protocol.

Do you want your protocol to run on top of a taller more complex stack of dependencies? Or would you prefer a shorter simpler stack of dependencies?

You're still inventing a protocol. You're just choosing where to put it. And you seem to be in favor of the taller more complex stack. Conversely I am in favor of the shorter simpler stack.
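To make that concrete: even a hypothetical HTTP-based mail submission still has to define its own vocabulary for sender, recipient, subject, and body. The endpoint URL and field names in the Python sketch below are invented for illustration; the point is that this application-level agreement is a protocol, whether it rides on raw TCP or on HTTP.

    import json
    import urllib.request

    # Hypothetical HTTP-based submission. The URL and field names are
    # made up; some agreement like this has to exist either way.
    message = {
        "from": "me@example.com",
        "to": ["you@example.net"],
        "subject": "Still a protocol",
        "body": "The field names above are the protocol.",
    }
    request = urllib.request.Request(
        "https://mail.example.com/api/v1/messages",   # placeholder endpoint
        data=json.dumps(message).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    # urllib.request.urlopen(request) would perform the submission.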

If SMTP didn't exist, and we needed a way to post a bunch of data to a remote server, you'd use HTTP, because it already works.

No.  /I/ would not.

HTTP(S) is inherently good at /retrieving/ data. We have used and abused HTTP(S) to make it push data.

Depending on what the data was and the interactive nature of it, I would consider a bulk file transfer protocol (FTPS / SFTP / SCP) or even NAS protocols. Things that are well understood and well tested.

If I had some reason that they couldn't do what I needed or as efficiently as I needed, I would develop my own protocol. I would certainly do it on top of a transport protocol that ensured my data arrived (or it told me), in order, and not duplicated.

I could develop my own protocol that allowed me to start from a clean slate and did not impose rules on what I could say or how I would say it. HTTP(S) starts with impositions of what you can say and how you can say it.

If you are creating something completely green field that rides entirely on top of HTTP(S), you have issues with bidirectionality and / or latency and / or request / reply correlation imposed by how the HTTP protocol operates.

TCP is so easy to use that simple Bash shell scripts can utilize it just by redirecting to Bash's /dev/tcp/<host>/<port> pseudo-files.

TCP already works too. And TCP works without the additional overhead of an HTTP stack on top of it.
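For a sense of how little is needed, here is a sketch in Python of talking to an SMTP server over a bare TCP socket, with no HTTP layer anywhere. The host name is a placeholder.

    import socket

    # Speak a few lines of SMTP directly over TCP. "mail.example.com" is a
    # placeholder for a real MX host.
    with socket.create_connection(("mail.example.com", 25), timeout=10) as sock:
        f = sock.makefile("rwb")
        print(f.readline().decode().rstrip())        # 220 server greeting
        f.write(b"EHLO client.example.com\r\n")
        f.flush()
        while True:
            line = f.readline().decode().rstrip()
            print(line)                              # advertised capabilities
            if not line.startswith("250-"):          # "250 " marks the last line
                break
        f.write(b"QUIT\r\n")
        f.flush()
        print(f.readline().decode().rstrip())        # 221 closing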

It isn't just the trust problem.  It is also the ease of use problem.

I believe that the trust part of the problem is significantly more difficult than the ease of use part of the problem.

Let's assume you trust all the CAs out there. Why isn't anybody using SSL for email?

My personal opinion is that laziness and / or ignorance are the two biggest impediments.

It is built into half the email clients out there.

I suspect that it's significantly more than half. I would speculate that more like > 90% of mainstream email clients support encrypted communications channels: STARTTLS / SMTPS / IMAPS / POP3S / LDAPS.

I speculate that all contemporary desktop email clients support at least one form of message encryption: S/MIME and / or PGP.

Mobile clients may be more problematic. I know that Apple's Mail.app on iOS has supported S/MIME for as long as I've been using it, 10+ years. I don't know about the Android ecosystem. I would be shocked if there wasn't an email client that supported S/MIME. PGP may be more spotty.

The problem is that exchanging certificates at an individual level is hard.

No it is not.  Not with a decent S/MIME implementation.

I send someone a /signed/ message, and their reputable MUA learns my public key from my signature. (With requisite PKI/CA signing my keys.)

Then when that person sends me a message, their reputable MUA will automatically encrypt messages to me using my public key.

I can even learn their public key from the signature that they apply to the message that they encrypt to me.

This is, or can be, almost completely transparent to end users. The only non-transparent part of actual day-to-day use of S/MIME is the occasional "Would you like to use $SENDER's public key when communicating with them in the future?" type pop-up. Thunderbird with one add-on streamlines even that.
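As a rough sketch of what the MUA is doing under the hood, the Python cryptography library can produce the same kind of multipart/signed S/MIME structure. The certificate and key file names below are placeholders; the important part is that the signature carries the signer's certificate, which is how the recipient's MUA learns the public key.

    from cryptography import x509
    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.serialization import pkcs7

    # Placeholder file names for the sender's S/MIME certificate and key.
    cert = x509.load_pem_x509_certificate(open("my_smime_cert.pem", "rb").read())
    key = serialization.load_pem_private_key(
        open("my_smime_key.pem", "rb").read(), password=None
    )

    body = b"This message body is signed.\r\n"

    # Build a detached S/MIME signature; the output is a complete
    # multipart/signed MIME entity that includes the signer's certificate.
    signed = (
        pkcs7.PKCS7SignatureBuilder()
        .set_data(body)
        .add_signer(cert, key, hashes.SHA256())
        .sign(
            serialization.Encoding.SMIME,
            [pkcs7.PKCS7Options.DetachedSignature, pkcs7.PKCS7Options.Text],
        )
    )
    print(signed.decode())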

Even installing the certificate is not any more difficult than original email account configuration. Admittedly it needs to happen again at certificate renewal time. So there is some room for improvement here. But even this is not a high bar. I don't believe this is a barrier to adoption.

The most difficult (read: annoying) part of S/MIME is acquiring the S/MIME certificate. Even that's not unduly difficult.

Oh, for an organization with an IT department it isn't a problem. The IT department can just issue a certificate to everybody, generating keys for them. Then it can stick all the certs in some kind of directory that their email clients all use.

Doing anything with S/MIME (or PGP) at scale like you would as an individual is obviously untenable. But there are other options that help with businesses. It's possible to acquire a certificate that will allow you to sign other certificates. (I don't think it's a code-signing certificate, but rather something quite similar.)

Getting your cousin to use SSL certs for their email is a whole different matter.

This is where the aforementioned "ignorance" comes in. I don't mean anything against your cousin, or anybody else for that matter. But the simple lack of understanding of what it is, why it's important, or how to do it is, in my humble opinion, the BIGGEST impediment to this.

I mean "ignorance" in the way that most 3-5 year old kids are ignorant about how to read. They simply have not been exposed to or learned how to do it yet. There is no malicious intent.

Heck, just using it between two different companies, both supported by professional IT depts, is a pain.

I see no reason for it to be a pain. Can someone acquire the certificate(s) on the employees' behalf? Can someone install them in the employees' email clients?

Thunderbird with one simple add-on will automatically sign messages by default.

As previously stated, sending a signed message contains sufficient information to bootstrap S/MIME.

The same add-on will automatically encrypt messages to recipients that Thunderbird has seen a (current) public key from.

So the end user simply needs to open Thunderbird and send an email. Which I'm fairly sure that there are hundreds of thousands of people doing already.

So, I'm back to ignorance and laziness. Some of the ignorance is out of laziness and may be bordering on malicious complacency.

Businesses /can/ make encryption trivial to use. Debian made it trivial to do so when I signed up for something. The portal that I used had an option to upload my public key. Uploading my public key is all that was required to receive S/MIME (or PGP) encrypted email from them.

It is possible to make end-to-end encrypted email much less of an affront than it is, or worse, than what many people seemingly believe it to be.

So, again, back to ignorance and / or laziness.

You could use TLS for the transport itself...

Yes.

This is and has been best practice for a decade or more.
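A minimal sketch of that best practice, using Python's standard smtplib; the host, port, and credentials are placeholders.

    import smtplib
    import ssl
    from email.message import EmailMessage

    msg = EmailMessage()
    msg["From"] = "me@example.com"
    msg["To"] = "you@example.net"
    msg["Subject"] = "TLS-protected submission"
    msg.set_content("Carried over a TLS-wrapped SMTP session.")

    context = ssl.create_default_context()        # verifies the server certificate
    with smtplib.SMTP("mail.example.com", 587) as smtp:
        smtp.starttls(context=context)            # upgrade the TCP session to TLS
        smtp.login("me@example.com", "app-password")
        smtp.send_message(msg)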

... have the email servers configured to only use TLS for those domains,

This is currently a manual process.

I would like to see some more automation around this.

I believe there is some work underway with DNS-based Authentication of Named Entities (DANE) -and- MTA support to dynamically learn about and enforce TLS requirements.
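The gist of the DANE side, sketched with the dnspython library: a sending MTA looks up the TLSA record for the destination's SMTP port (the host name below is a placeholder) and only delivers if the presented TLS certificate matches. Doing this safely also requires a DNSSEC-validating resolver.

    import dns.resolver

    # TLSA record that pins the TLS certificate for SMTP (port 25) on the
    # placeholder host mail.example.com.
    answers = dns.resolver.resolve("_25._tcp.mail.example.com", "TLSA")
    for record in answers:
        print(record.usage, record.selector, record.mtype, record.cert.hex())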

but actually doing E2E encryption is hard with email because there aren't a lot of standards for key exchange across unrelated orgs.

S/MIME is self-bootstrapping. Signed emails, which can be sent to anyone, provide the sender's public key. There is no need for any external involvement. MUAs can completely handle this.
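For illustration, the receiving side of that bootstrap might look like the sketch below: pull the signer's certificate (and therefore the public key) out of the PKCS#7 signature part of a received signed message. The file name is a placeholder and the parsing is deliberately simplistic.

    from email import message_from_bytes
    from cryptography.hazmat.primitives.serialization import pkcs7

    raw = open("signed_message.eml", "rb").read()    # placeholder file name
    msg = message_from_bytes(raw)

    for part in msg.walk():
        if part.get_content_type() in (
            "application/pkcs7-signature",
            "application/x-pkcs7-signature",
        ):
            # The signature blob carries the signer's certificate chain.
            certs = pkcs7.load_der_pkcs7_certificates(part.get_payload(decode=True))
            for cert in certs:
                print(cert.subject.rfc4514_string(), cert.not_valid_after)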

DNSSEC is:

1.  Still poorly supported even where it makes sense.

Yet another example of ignorance and / or laziness.

I have found DNSSEC to be relatively easy to implement, and trivial to enable on my recursive resolvers.

The ignorance portion is relatively easy to resolve if people want to. I highly recommend DNSSEC Mastery by Michael W. Lucas. That $20 (?) book and a moderate amount of motivation are all anybody who wants to implement DNSSEC /needs/.
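A quick way to see whether a recursive resolver is actually validating, sketched with dnspython: query a signed zone (example.com is signed) and check whether the AD flag comes back set. The resolver address is a placeholder.

    import dns.flags
    import dns.message
    import dns.query

    # Ask a recursive resolver for a record in a DNSSEC-signed zone and
    # check the Authenticated Data (AD) flag in the response.
    query = dns.message.make_query("example.com", "A", want_dnssec=True)
    response = dns.query.udp(query, "192.0.2.53", timeout=5)   # placeholder resolver
    print("validated:", bool(response.flags & dns.flags.AD))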

2.  A great hypothetical solution for the 0.002% of email users who
own a domain name, like you and me.

DNSSEC can be used for FAR more than just email domains.



--
Grant. . . .
unix || die
