Murray S. Kucherawy wrote in
 <CAL0qLwby=f5s8yezoahbj3oroc4zlx3t_unt6x5tvtjduot...@mail.gmail.com>:
 |On Mon, Feb 5, 2024 at 1:39 PM Steffen Nurpmeso <stef...@sdaoden.eu> wrote:
 |> If a graphical user interface gives you a green "ok" button to
 |> click, or "red" otherwise, that is even better as in browser URL
 |> lines.  Then pop up a tree-view of message modifications and
 |> alertize where it broke, checkbox for is-this-really-an-evil.
 |
 |I remember seeing a study presented at a conference that showed people
 |sometimes[*] click on links found in their spam folders.  If the act of
 |moving a suspect message out of the way is not enough to protect users from
 |bad actors, I can't imagine how a green-yellow-red light icon in the inbox
 |is going to do any better.
 |
 |-MSK
 |
 |[*] By this, I mean a statistically significant portion of them do.  The
 |number I remember is 18%, but I wouldn't be shocked to see it higher.

Me neither.
But then, it is always easy to say this.  Whereas in practice, one
is having "one of those days": one was left by his girlfriend, had
a family member die or fall mortally ill, is on drugs, only with
luck escaped a traffic accident and is hormonally overwhelmed, or is
bored to death because the gambling hall has its closed day.  For
example, my neighbour here in the flat where I work plays for many
hours a day (but not professionally).
A study on spam folders is a thing of its own; in my Google account
they moved Unicode/CLDR forum messages to spam (they were right,
even though Google pays to have a major vote there).

However, this is where the real thing should step in, in my opinion:
the browser or graphical MUA should treat such user requests inside
declared spam folders with caution, add additional warnings, and
maybe isolate the "opened whatever" very strictly.
(I personally have my normal browser in one "container", and another
one, with accounts, passwords and such, in a second; the filesystem
overlays do not cross.  That is what I can do.  But *I* think that is
what makes up a *really* good interface; i.e., protect against weak
moments if it is so "easily doable", so that when the visited porn
site claims it has captured your box and you need to pay or everybody
will know, that is plainly not true.  Then again, should the browser
ask for video or microphone or password access (what do I know) for
followed spam-folder links, even though I disabled that?  How
annoying.  I do not know.)
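
Just to make the "isolate the opened whatever" part concrete, here is
a minimal sketch of what a MUA hook could do, assuming it can hand
link activation to a script; the folder names, the Firefox switches
and xdg-open are only illustrative assumptions:

  #!/usr/bin/env python3
  # Sketch only: how a MUA could hand off links from a spam folder.
  # Folder names, the Firefox switches and xdg-open are assumptions.

  import subprocess
  import tempfile

  SPAM_FOLDERS = {"Spam", "Junk"}     # folders whose links are suspect

  def open_link(url: str, folder: str) -> None:
      """Open url; links from a spam folder need an explicit
      confirmation and go into a throw-away, isolated browser profile."""
      if folder not in SPAM_FOLDERS:
          subprocess.run(["xdg-open", url], check=False)
          return
      answer = input(f"This message lives in {folder!r}.\n"
                     f"Really follow {url!r}? [y/N] ")
      if answer.strip().lower() != "y":
          return
      with tempfile.TemporaryDirectory() as profile:
          # A fresh, empty profile directory: no cookies, no passwords,
          # and it is deleted again once the browser exits.
          subprocess.run(["firefox", "--profile", profile,
                          "--new-instance", url], check=False)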

But back to the graphical lock symbol of browsers.  It visualizes
certificate protection, but what does it really tell me, other than
that someone paid for a certificate?  All a "normal" person can do
is say "well, that is surely right", or, as an American, maybe even
accept it as such because of the system, like people seem to refrain
from disabling Tesla data collection because of the wording in the
manual (I am talking about the Mozilla car "study"), which
effectively says "data collection is part of our business, and if
you like the product and all that, it helps us get better, etc.
etc.", if I understood that right.

I am pretty certain that if certificates really "belonged" to
domains, just like a DKIM public key belongs to a domain, i.e.
directly, and if there were good documentation, then people would
gain certainty over time, you know.  You build up trust.  This is
what a MUA gives you for S/MIME or PGP failures, and possibly, in
some, for DKIM failures: a direct failure, and
a noticeable one.
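
That "directly" is literally a DNS lookup: the DKIM public key is
published at <selector>._domainkey.<domain>, in the signer's own
zone, with no third party in between.  A minimal sketch, assuming
the dnspython package (selector and domain below are placeholders):

  #!/usr/bin/env python3
  # Sketch only; assumes dnspython ("pip install dnspython").
  # Selector and domain in the example are placeholders.

  import dns.resolver

  def fetch_dkim_key(domain: str, selector: str) -> str:
      """Return the TXT record that publishes the DKIM public key."""
      name = f"{selector}._domainkey.{domain}"
      answer = dns.resolver.resolve(name, "TXT")
      # A DKIM record looks like: "v=DKIM1; k=rsa; p=MIIBIjANBg..."
      return "".join(part.decode()
                     for rdata in answer for part in rdata.strings)

  if __name__ == "__main__":
      print(fetch_dkim_key("example.org", "selector1"))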

But given the noise that browsers make if a certificate cannot be
attested, maybe the lock symbol is pretty useless anyway, as long as
the browser warns about non-TLS-secured connections that pass
authentication data or the like.  I must admit that this has not
happened here for a long time.

I continue to claim that in an email program, if there is a strong
visual indicator that a signature could not be verified, then under
normal conditions people would pay more attention to what they do
than with an unremarkable indicator (or, hm, none at all).
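
To illustrate what I mean by "strong" versus "unremarkable", a tiny
sketch; the function name and the ANSI-colour rendering are my own
assumptions, a graphical MUA would of course use a banner widget:

  #!/usr/bin/env python3
  # Sketch only: a loud full-width banner on failure, a quiet line
  # on success, as a text-mode MUA might render it.

  def signature_banner(result: str, width: int = 72) -> str:
      """result: 'pass', 'fail' or 'none' (no signature at all)."""
      if result == "pass":
          return "signature: verified"      # deliberately unremarkable
      text = ("SIGNATURE COULD NOT BE VERIFIED" if result == "fail"
              else "MESSAGE CARRIES NO SIGNATURE")
      bar = "!" * width
      # \033[41m paints a red background on ANSI terminals, \033[0m resets.
      return f"\033[41m{bar}\n{text.center(width)}\n{bar}\033[0m"

  if __name__ == "__main__":
      print(signature_banner("pass"))
      print(signature_banner("fail"))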

Maybe they would also be more careful if any message looked at
within the spam folder got such an indicator, each time anew (i.e.
off, then on) when a new message is opened therein.  That would be
interesting.  It is all about consciousness, which can be trained
or "highlighted", I would think, all those studies aside.
For example, Japanese train drivers have to consciously point their
finger at what they are doing next.  (After some accident in the
past.)  They are trained to do that.  (I think they are also
permanently video-watched by now, at least black-box-alike, but I am
not certain.)

And *if* there were a cryptographically verifiable stack of the
message, say user A sends to mailing list B, which sends to receiver
C, and if the signature fails, and if the signature of A was valid
before B but now fails (i.e. all the usual DKIM+ business), and if
I had a single traffic-light field that is in fact a button, which
upon press shows that stack in a tree view, for example with
traffic-light colours again, and then offers, beside the red marker
for mailing list B, the option "accept that this sender breaks older
signatures" or the like, and the user interface incorporates this
decision for later checks, that would be a good thing.  And if
*then* the signature were broken beyond that point, an alert would
draw attention again; that I also think.  (In effect the necessary
code would be surprisingly little, and would require only a domain
name and a bit flag, much less than e.g. MTA-STS or DANE require.
And buttons and tree views, of course.  Assuming that the program
tests signatures etc. already.)
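
To show how little state that really is, a minimal sketch; all
names, the Hop record and the JSON file are my own assumptions,
only the "domain name and a bit flag" idea comes from the above:

  #!/usr/bin/env python3
  # Sketch only; assumes the MUA already verifies signatures per hop.

  from dataclasses import dataclass
  from enum import Enum
  import json, pathlib

  class Light(Enum):
      GREEN = "green"    # nothing broken at this hop
      YELLOW = "yellow"  # breaks earlier signatures, but user accepted that
      RED = "red"        # breaks earlier signatures, not accepted

  @dataclass
  class Hop:
      domain: str            # domain that handled the message here
      breaks_earlier: bool   # did this hop invalidate earlier signatures?

  STORE = pathlib.Path("accepted-breakers.json")  # domain -> bit flag

  def load_accepted() -> dict:
      return json.loads(STORE.read_text()) if STORE.exists() else {}

  def accept_breaker(domain: str) -> None:
      # The button: "accept that this sender breaks older signatures".
      accepted = load_accepted()
      accepted[domain] = True
      STORE.write_text(json.dumps(accepted))

  def evaluate(stack: list) -> list:
      """Colour each hop for the tree view behind the traffic light."""
      accepted = load_accepted()
      colours = []
      for hop in stack:
          if not hop.breaks_earlier:
              colours.append((hop.domain, Light.GREEN))
          elif accepted.get(hop.domain):
              colours.append((hop.domain, Light.YELLOW))
          else:
              colours.append((hop.domain, Light.RED))
      return colours

  if __name__ == "__main__":
      # A sends via mailing list B to receiver C; B broke A's signature.
      stack = [Hop("author.example", breaks_earlier=False),
               Hop("list.example", breaks_earlier=True)]
      print(evaluate(stack))          # list.example comes out RED
      accept_breaker("list.example")  # the user presses the accept button
      print(evaluate(stack))          # now YELLOW; a new breaker stays RED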

--steffen
|
|Der Kragenbaer,                The moon bear,
|der holt sich munter           he cheerfully and one by one
|einen nach dem anderen runter  wa.ks himself off
|(By Robert Gernhardt)
