Is this diatribe really necessary? What is important here is that we all
have a safe and secure means of communication outside the snooping
capabilities of the ubiquitous NSA and its foreign affiliates.  If no
one is concerned about intelligence people making contributions to the
code, why not simply invite the NSA to set the GnuPG protocols, or
better still just send them your private key?

Have you forgotten Edward Snowden's revelations about massive
unrestricted spying? The NDAA in America allows anyone to be
detained indefinitely for any reason, and the latest rider on the
appropriations bill permits the US government to spy on anyone
anywhere.  Isn't that a police state?

We're getting off track in this forum... the mission should be to provide
an open-source platform that makes decryption too time-consuming and
costly for the NSA and other operations groups to bother with.

ED

Robert J. Hansen:

> As soon as your opinion of me matters a whit, I’ll let you know.  Until then, 
> regardless of whether you think it’s a lame excuse, it will continue to be my 
> policy.

An argument for why your policy should be based on one irrational human
would be nice. It may be the case that people don't want intelligence
people working on the software, but as far as is public, you are not one.
There is one email from a probably crazy person who may claim the
opposite, and because of it you don't want to contribute. Please clear
this up for me; I think I don't understand.

> I will let other people chime in with their thoughts on your opinion of, 
> “sure, let’s invite intelligence professionals to make code contributions to 
> GnuPG and Enigmail.”

That was not my statement. Please don't twist the meaning of my
opinion. I still can't see why you can't contribute to the project.
I don't want to actively invite intelligence professionals to make code
contributions.
Besides that, you can't prove that INT people aren't already actively
contributing code to your software.
The code is at least open source. In the best case, inserted bugs can be
located and the contributor identified. Assuming that people contribute
under their real identity, it's not really very comfortable to be
responsible for code which led to serious vulnerabilities.

Just one example of a (code) contribution I have no problem with, even
if an untrusted person makes it: establishing common coding standards,
or (light) refactoring of the code (where, in the best case, the
execution path is not touched). It is relatively easy to check whether
such a commit is trustworthy. Agree?

I wonder why you haven't answered this part of my email:

> You are a software engineer. Have you already looked at the code and
> given recommendations to make the code better? You don't have to
> contribute code in this case. Maybe an opinion about the code could
> bring the project a step further. I don't know if this already happened.

> Then you won’t have a community.  Human beings aren’t rational.  The quickest 
> way to have no community is to insist that to be part of the community you 
> must lack human failings, like irrationality.

So where is the community that doesn't want you to contribute code? You
are already contributing to Enigmail, because you're on the usability
team. Maybe you are already giving the development team recommendations
on the software in terms of usability. Why not give recommendations in
terms of code quality?

> Given the FBI is an intelligence agency, then yes, I’ve hung out with 
> intelligence types since I was about eleven.

I think it's not easy, in a police state, to avoid contact with an
intelligence agency, willingly or unwillingly.

> I understand you don’t think the FBI is an intelligence agency.  You’re wrong.

The problem is that you are not part of an official intelligence agency.
If you are, the discussion is meaningless.
I guess there are not many restrictions on intelligence and police
capabilities in the USA. Therefore, any agency that claims to make the
state more secure possibly has some intelligence capabilities.

> Secret knowledge isn’t required:

I haven't claimed the opposite. There are INT missions which can only
be carried out by state agencies, or by state agencies with private
accomplices.

> Nor is working for the state a prerequisite:

I haven't claimed the opposite.

The secret services of the USA work together with private companies
(e.g. Booz Allen Hamilton); there are thousands of people in the private
sector who keep state secrets.

> You seem to have a view of the world which requires that intelligence be 
> state-sanctioned, clandestine or covert, etc., etc.  It’s not.  All being in 
> the intel business means is you’re acquiring information on behalf of 
> decisionmakers — that’s all.

Nope, but I think there is a qualitative difference between a company
spying for its own reasons (industrial espionage) and a state building
huge spy programs, using laws to compel companies to aid it and gaining
huge masses of data from the backend.
Besides that: I bet there are several laws in your country which
prohibit some varieties of private spying.
Again: a qualitative difference.

> In the modern era, we are *all* in the intelligence business… a fact which, I 
> think, is poorly recognized by the world.

It depends on your definition of "intelligence business". If you use
weak (rather pathetic) definitions of "intelligence business" and call
every person who has ever used a telephone book a person in the
intelligence business, it is easy to come to this conclusion.
Of course this is wrong. Most people who use telephone books, search
engines, or Facebook to stalk other people don't have capabilities as
powerful as the secret services'. Don't you think so?

In my opinion this definition has one purpose: downplaying the secret
services, because, well, we are all the same!
I think there are democratic people out there who would not spy on
democratic institutions and would not accept damaging them. I wouldn't
call them people in the intelligence business.

----------------

@ the people who are using this thread to thank Enigmail: Was I really
that pessimistic and destructive? To emphasize it: it's very good that
Enigmail exists, and I also have to say thank you to the developers.
It's probably the best implementation of email encryption using OpenPGP.
Still, there are many issues which could be solved.
As already mentioned: you are not warned that you could be using the
wrong key. It's as if you accessed a webserver with a self-signed
certificate without getting a warning. Is it really that hard, or that
user-unfriendly, to do something about it?
Revoked keys are marked red in the certificate list when sending an
encrypted email; why not mark unverified certificates yellow?
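Since revoked keys already get a colour, the same machinery could flag unverified ones. A minimal sketch of the idea, assuming a client reads the machine-readable output of `gpg --with-colons --list-keys` (the validity letter in field 2 and the key ID in field 5 follow GnuPG's doc/DETAILS file; the function name and colour scheme here are my own invention, not Enigmail code):

```python
def classify(colon_output: str) -> dict:
    """Map each public key's long key ID to a suggested warning colour,
    based on the validity letter in `gpg --with-colons` output:
    'r' = revoked, 'f'/'u'/'m' = verified/trusted, anything else
    ('-', 'q', 'n', 'e', ...) = unverified."""
    colours = {}
    for line in colon_output.splitlines():
        fields = line.split(":")
        if fields[0] != "pub":
            continue                     # only look at public-key records
        keyid = fields[4]                # field 5: long key ID
        validity = fields[1]             # field 2: calculated validity
        if validity == "r":
            colours[keyid] = "red"       # revoked: already marked today
        elif validity in ("f", "u", "m"):
            colours[keyid] = "green"     # verified: safe to use
        else:
            colours[keyid] = "yellow"    # unverified: the proposed warning
    return colours
```

The point is only that the information needed for a yellow warning is already available to the client; no extra key-server round trip is required.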

> If I get you right, then you're telling me that you did like Enigmail
> in the past, even though I'm a stubborn developer who does not listen
> end users and ignore bug reports and feature requests systematically.
> This sound a bit weird to me, to say the least.

No. Let me revise: in past Enigmail versions, certificates were not
usable as long as they were not valid (signed). Because this was seen as
user-unfriendly, all keys have been made valid by default. Is that
correct?
If that is the case, I can understand that the old behaviour was very
user-unfriendly. A beginner probably doesn't know that he has to verify
the certificates, so an attack could succeed. But why, as a consequence,
was the decision made that all certificates should be valid by default?
Why not inform people about the problem and let them know how they can
deal with it?
I'm not saying this is a perfect solution, but it's better. Only a few
people may understand certificate errors when browsing, but if someone
cares, he probably will not rashly add an exception.
Do you know OTR? Its Pidgin plugin is admittedly not a perfectly
usability-friendly solution, but I think it makes some things clearer:
if you encrypt your communication without checking fingerprints or
similar, the connection is unauthenticated and marked with a warning
sign and yellow text.
There could be some more text, but I think that's a step in the right
direction.

> Better documented or clearer code would
> not help. It simply boils down to the way enigmail can work within
> Thunderbird and calling the underlying GnuPG which sometimes takes a
> while, which can be several ten seconds, depending on the operations it
> performs.
>
> The only thing we actually can do is document this behaviour in our
> handbook and the FAQ (note to self). So your report actually will lead
> to some improvement.

Okay, it's correct that better documented or cleaner code would not
have helped in this situation.
This bug is marked as wontfix; it seems you can't do anything about it.
If there was any discussion on the mailing list about this issue, sorry,
I haven't read it.
The situation as I understand it: Enigmail can't control the behaviour
of GnuPG. If GnuPG takes long for certain actions, Thunderbird could
think that Enigmail is not running correctly and suggest continuing or
stopping it.
If we enumerate all cases where GnuPG takes long, those actions can
probably be forced before any critical action is performed. This could
decrease the probability that the alert window pops up at the wrong time.
If all such cases can be enumerated and forced, it should be checked
whether it's possible to deactivate the send button in Thunderbird. If
it is, the send button is deactivated before Enigmail forces GnuPG's
risky actions, so Enigmail is in a fail-safe condition if the user stops
it. Is that possible?
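The proposal above can be sketched in a few lines. This is illustrative Python only, not Enigmail code (the real extension is JavaScript inside Thunderbird), and the class and operation names are invented: the point is just that "send" stays disabled until every known-slow GnuPG operation has been run to completion in the background, so nothing slow remains to trigger the hang-detection dialog during the actual send.

```python
import threading

class ComposeWindow:
    """Toy model of a compose window that pre-runs slow operations
    (e.g. agent start-up, key refresh) before enabling the send button."""

    def __init__(self):
        self.send_enabled = False        # send starts disabled

    def preflight(self, slow_ops):
        """Run every known-slow operation on a worker thread, then
        enable send. A real UI would do this asynchronously instead of
        joining, but the ordering guarantee is the same."""
        def work():
            for op in slow_ops:
                op()                     # force the slow action now
            self.send_enabled = True     # only now is sending allowed
        t = threading.Thread(target=work)
        t.start()
        t.join()                         # blocking here only for the sketch
```

Whether Thunderbird's extension API actually allows toggling the send button this way is exactly the open question in the text; the sketch only shows the intended ordering.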

Regards,
Kristy


_______________________________________________
enigmail-users mailing list
[email protected]
To unsubscribe or make changes to your subscription click here:
https://admin.hostpoint.ch/mailman/listinfo/enigmail-users_enigmail.net
