Re: Your Thoughts

2019-07-03 Thread Leo Gaspard via Gnupg-users
Alyssa Ross writes:
>> > For example, why isn't ask-cert-level a default?
>>
>> For an alternative view on ask-cert-level see also:
>>
>> https://debian-administration.org/users/dkg/weblog/98
>
> Oh, interesting. Thank you for showing this to me. I had it in my head
> that a "weak" signature would count as a marginal in the web of trust,
> but I suppose I was wrong about that.
>
> In that case, I agree that ask-cert-level doesn't make sense as a
> default.

Well, that's also an ecosystem issue, and if I'm not mistaken this
thread (or was it another one?) was going quite far in the “let's fix
the ecosystem and keep the standard” direction.

“weak” *could* be used for verification. For instance, if I were to
write an OpenPGP client, I'd likely make it so that:
* Trust (which is 0-255 in the standard) is a slider with marks like “I
  trust not at all|a bit|a lot|completely” (with a proper sentence so
  that people understand, not like what I'm putting here)
* Signature level (4 levels in the standard) is a similar slider
* Both trust and signature level are mapped to a [0, 1] value, and
  multiplied to get the amount of confidence we have thanks to this
  particular signature that the key is correct
* All such amounts of confidence get added, and the “3-marginals or
  1-full” rule becomes a simple number that needs to be passed with this
  addition (also configured as a slider with some “normal user / … /
  paranoid user” landmarks)

(for trust signatures, in such a scheme they'd first be cut off to
follow the OpenPGP certification, and then get multiplied by the length
of the path, to account for decreasing trust along longer paths)

This is compatible with RFC4880 (well, except it doesn't respect the
“SHOULD” that full trust is 120 and marginal 60, because it actually
uses the whole range).
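
To make the scheme concrete, here is a minimal sketch in Python -- the
exact mappings, the threshold value and all names are assumptions of
mine, not the behavior of any existing OpenPGP client:

```python
# Illustrative sketch of the slider scheme above; the [0, 1] mappings
# and the threshold are assumptions, not GnuPG's actual computation.

def confidence(owner_trust: int, cert_level: int) -> float:
    # Trust is 0-255 per RFC 4880; certification levels 0x10-0x13
    # (generic, persona, casual, positive) are mapped here to 0-3.
    return (owner_trust / 255) * (cert_level / 3)

def key_is_valid(certifications, threshold: float = 1.0) -> bool:
    # `certifications` is a list of (owner_trust, cert_level) pairs;
    # the "3 marginals or 1 full" rule becomes a single summed threshold.
    return sum(confidence(t, l) for t, l in certifications) >= threshold

# Two middling certifications plus a strong one pass a threshold of 1.0:
print(key_is_valid([(120, 2), (120, 2), (255, 3)]))  # True
```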

So ask-cert-level might make sense as a default. Just not as GnuPG's
default, as GnuPG doesn't have such a behavior (and no client that I
know of currently does). But someday, maybe.



Re: SKS Keyserver Network Under Attack

2019-07-01 Thread Leo Gaspard via Gnupg-users
Mirimir via Gnupg-users writes:
>>- Embeds a hardcoded list of already-disrupted keys for which packets
>>  should be filtered out when serving them
>
> That's what I meant. Plus some mechanism for testing keys, so poisoned
> ones are blocked, as soon as possible.
>
> It'd also be useful for SKS to report "this key has been poisoned", and
> suggest alternative sources, rather than just failing silently.

I think we can't rely on humans actually reading the output, even if
GnuPG were able to display the output of e.g. `--refresh-keys` in a way
understandable by a human.

Also, the aim of my suggestion was to actually *not* block the
keys. The second point of part 1 is there just to filter a hardcoded
list of packets out of them, so that key updates still propagate.

The first point was there to prevent additional keys from being
poisoned, and the second to limit the damage caused by already-poisoned
keys -- the first one is unfortunately quite necessary, as
sks-keyservers can't reasonably coordinate changes across the ~220
keyservers every time a new key gets poisoned (or even twice, for that
matter: one flag day is already bad enough).

>> Do you think such a plan could reasonably be carried out, to at least
>> keep the most basic functionality of not breaking existing
>> installations and allowing them to keep revocations flowing? The
>> biggest issue I can see is that it requires quite a big amount of
>> development work.
>
> But less work than actually fixing SKS, right?

Well, nowadays “fixing SKS” means “making Hagrid able to synchronize its
cryptographic-only content and propagate third-party signatures under
some circumstances”, as far as I understand.



Re: SKS Keyserver Network Under Attack

2019-06-30 Thread Leo Gaspard via Gnupg-users
> 1. We would have to ensure that all keyservers block the same
> uploads. One permissive keyserver is a backdoor into the entire
> system. We can’t block bad keys at reconciliation time for the same
> reasons that have been hashed to death already.

One way to do that, though it would mean officially sunsetting SKS,
might be to:

1. Publish an update to SKS that:
   - Blocks all uploads of any packet that is not a revocation signature
     packet (maybe also checking that the revocation is actually
     correctly signed, so that revocation-packet flooding doesn't become
     the new flaw) -- see the sketch after this list
   - Embeds a hardcoded list of already-disrupted keys for which packets
     should be filtered out when serving them

2. Pressure keyserver operators into updating

3. Kick all not-up-to-date keyservers from the pool after some delay
   deemed acceptable (shorter means less broken GnuPG, longer means
   fewer keyservers kicked out)
   Note: I do not know how the pool is handled. Maybe this would mean
   that the new update to SKS would need to stop synchronizing with
   earlier versions, and that the hkps pool should be turned off until
   enough keyservers are updated to handle the load?
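
To make step 1 concrete, here is a rough sketch of the two filters in
Python -- the packet model and helper names are hypothetical (SKS itself
is written in OCaml and has no such API); this only illustrates the
intent:

```python
# Hypothetical sketch of the two filters in step 1; the packet model
# (plain dicts) and the helpers are made up for illustration.

# RFC 4880 signature types: 0x20 = key revocation, 0x28 = subkey
# revocation, 0x30 = certification revocation.
REVOCATION_SIG_TYPES = {0x20, 0x28, 0x30}
POISONED_FINGERPRINTS = set()  # hardcoded list shipped with the release

def signature_verifies(pkt: dict) -> bool:
    # Placeholder: a real implementation would verify the revocation
    # signature against the already-stored primary key here.
    return True

def filter_upload(packets: list) -> list:
    """Step 1a: accept only correctly-signed revocation signatures."""
    return [p for p in packets
            if p.get("sig_type") in REVOCATION_SIG_TYPES
            and signature_verifies(p)]

def filter_serving(fingerprint: str, packets: list) -> list:
    """Step 1b: when serving an already-poisoned key, strip the
    flooding packets (crudely modeled here as third-party certs)."""
    if fingerprint in POISONED_FINGERPRINTS:
        return [p for p in packets if not p.get("third_party_cert")]
    return packets
```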

Do you think such a plan could reasonably be carried out, to at least
keep the most basic functionality of not breaking existing
installations and allowing them to keep revocations flowing? The
biggest issue I can see is that it requires quite a big amount of
development work.

Hope that helps,
  Leo



Re: [NIIBE Yutaka] STM32F103 flash ROM read-out service

2018-06-06 Thread Leo Gaspard via Gnupg-users
On 06/06/2018 06:56 PM, NdK wrote:
> On 06/06/2018 17:49, Tom Li via Gnuk-users wrote:
> 
>> BTW, BasicCard and JavaCard seemed even more obscure and I cannot find
>> any public service of cracking.
> Because those are (at least should be) based on secure chips.
> 
>> But it does not solve any real problem in the perspective of cryptography.
>> They are all "security through obscurity" at best, just driving out script
>> kiddies (or equipment kiddies?) at worst.
> The only secure (even against decapping attacks) device I know of is a
> very old parallel-port "key" a friend described to me ~25y ago.
> It was made of 3 silicon layers: the outer ones only contained interface
> circuits and 'randomness' while the keys and the logic were in the
> central layer. Trying to remove the outer layers destroyed the random
> patterns that were used as 'internal master key', rendering the rest
> completely useless.

Some people do reverse-engineering based on the photons emitted by
switching transistors. Those would get through this shielding.

Unfortunately, I can't find the link again to the conference talk where
I heard people explaining they did that… sorry.

Another kind of attack would be EM pulses or lasers used to induce
faults in a crypto computation; that would get through this shielding
too.

There are defenses against these attacks (well, I'm not really sure
about the photon-emission one) that are deployed in secure elements.
Attacks like these are tested by CC/EAL evaluation laboratories.

All that to say: hardware security, to me, is a way to increase the cost
an attacker has to pay to perform an attack. All devices are eventually
vulnerable, given the right price; the point is to make the attack more
costly than the benefit of breaking the card and/or than finding a way
around the card. (I'm not going to extend this point to software
security, but I'd think it most likely holds there too.)

Oh, and also to say: choosing between a non-secure-element open-source
token and a secure-element NDA-ed-and-thus-non-open-source token is not
an obvious choice.

HTH,
Leo



Re: Breaking changes

2018-05-22 Thread Leo Gaspard via Gnupg-users
On 05/22/2018 11:48 PM, Dennis Clarke wrote:
> On 05/22/2018 05:38 PM, Dan Kegel wrote:
>> Lessee...
>> https://en.wikipedia.org/wiki/GNU_Privacy_Guard
>> already give an end-of-life date for 2.0, but none for 1.4.
>> And since Ubuntu 16.04 includes 1.4, there are likely
>> to still be a few vocal 1.4 users out there.
>>
>> How about announcing an end-of-life date for 1.4 that
>> is in the future (say, by 3 to 6 months)?
> 
> Too fast. Think 12 months as a minimum. There is prod code
> out there running for years and a timeline that allows proper
> project schedule/costing/testing would be better.

If the announced end-of-life is 12 months, then people will complain for
9 months, and maybe start working on migrating during the last 3 months.

I mean, I'm still seeing people actively developing Python 2 code bases
without even thinking of migrating to Python 3 *now*, and its
retirement was initially announced for 2015…

The longer you leave people with maintenance, the longer they will want
maintenance past the deadline.

I think 3-6 months is more than enough, and if people can't manage to
update their production code in this time frame they can live with an
unmaintained GnuPG until they finish migrating (unless they want to pay
for continued 1.4 support, that is).

I don't have a personal opinion, but dropping 1.4 appears to have two
advantages to me: first, it reduces the voluntary maintenance burden,
and second, it may help gather funding for work on 2.3, if people choose
to contract with g10code for continued maintenance.

GnuPG 1.4 has been out for way longer than necessary, and people are
never going to migrate off it unless they are forced to.



Re: Efail or OpenPGP is safer than S/MIME

2018-05-14 Thread Leo Gaspard via Gnupg-users
On 05/14/2018 09:45 AM, Werner Koch wrote:
> The topic of that paper is that HTML is used as a back channel to create
> an oracle for modified encrypted mails.  It is long known that HTML
> mails and in particular external links like 
> are evil if the MUA actually honors them (which many meanwhile seem to
> do again; see all these newsletters).  Due to broken MIME parsers a
> bunch of MUAs seem to concatenate decrypted HTML mime parts which makes
> it easy to plant such HTML snippets.

The full details appear to be out [1].

If I read it correctly, it also describes another attack, no longer
based on user agents concatenating HTML MIME parts, but rather on CFB
gadgets. That one does indeed look like a flaw in the OpenPGP
specification (and thus in GnuPG's implementation of it), and not in
MUAs?
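
To illustrate the building block, the raw CFB malleability that such
gadgets rely on can be demonstrated with plain AES-CFB -- this is only
the principle, not the full Efail attack, as OpenPGP's CFB variant,
quick-check bytes and MDC handling all differ:

```python
# Demonstration of raw CFB malleability: knowing one plaintext block is
# enough to append ciphertext that decrypts to attacker-chosen text.
import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

key, iv = os.urandom(16), os.urandom(16)
plaintext = b"Content-type: te" + b"xt/html; secret!"   # two 16-byte blocks
enc = Cipher(algorithms.AES(key), modes.CFB(iv)).encryptor()
ct = enc.update(plaintext) + enc.finalize()

# The attacker knows ct, iv and the plaintext of block 1 (a guessable
# header), so E(c0) = c1 XOR p1, and can forge a block of chosen text:
c0, c1, known_p1 = ct[:16], ct[16:32], plaintext[16:32]
chosen = b'<img src="http:/'                            # 16 bytes
tampered = ct + c0 + xor(xor(c1, known_p1), chosen)

dec = Cipher(algorithms.AES(key), modes.CFB(iv)).decryptor()
pt = dec.update(tampered) + dec.finalize()
assert pt[48:64] == chosen   # block 2 is garbage, block 3 is `chosen`
```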


[1] https://efail.de/



Re: key distribution/verification/update mechanisms other than keyservers [was: Re: a step in the right direction]

2018-01-16 Thread Leo Gaspard
On 01/16/2018 10:56 PM, Kristian Fiskerstrand wrote:
> On 01/16/2018 07:40 PM, Daniel Kahn Gillmor wrote:
> 
>> The keyserver network (or some future variant of it) can of course play
>> a role in parallel to any or all of these.  for example, keyservers are
>> particularly well-situated to offer key revocation, updates to expiry,
>> and subkey rotation, none of which would necessarily involve names or
>> e-mail addresses.
>>
>> It would be interesting to see a network of keyserver operators that:
>>
>>  (a) did cryptographic verification, and rejected packets that could not
>>  be verified (also: required cryptographic verifications of
>>  cross-signatures for signing-capable subkeys)
>>
>>  (b) rejected all User IDs and User Attributes and certifications over
>>  those components
>>
>>  (c) rejected all third-party certifications -- so data attached to a
>>  given primary key is only accepted when certified by that primary
>>  key.
>>
> 
> thanks for this post Daniel, my primary question would be what advantage
> is gained by this verification being done by an arbitrary third party
> rather by a trusted client running locally, which is the current modus
> operandus. Any keyserver action doing this would just shift
> responsibilities to a third party for something better served (and
> already happens) locally.

I guess one argument could be “when lazy people use the keyserver
network, they likely get not-too-bad data”.

I seem to remember that, at some point in the last few years, when a
keyserver returned multiple keys, GnuPG imported them all, even when
searching for a fingerprint the returned keys didn't match. If even
GnuPG can fail at properly sanitizing the data received from keyservers
(and I hope I'm not mistaken in this memory), I guess having such
keyservers serve only curated data in their “nominal” use could help a
bit.
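
The sanitization in question is conceptually tiny -- something along the
following lines, where the parsed-key type (anything with a
`.fingerprint` attribute) is a hypothetical stand-in for a real OpenPGP
parser:

```python
# Sketch of the client-side check: import only the keys whose
# fingerprint matches the one that was searched for.
def sanitize_lookup(requested_fpr: str, returned_keys) -> list:
    wanted = requested_fpr.replace(" ", "").upper()
    return [key for key in returned_keys
            if key.fingerprint.replace(" ", "").upper() == wanted]
```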

It could also help limit the impact of the nightmare scenario RJH has
described, by making sure all the data is “cryptographically valid and
matching”, thus making it harder to just propagate arbitrary data
through the network. Then again, I don't have the structure of an
OpenPGP key in mind, so maybe this would not work, due to fields in the
key that could be set to arbitrary data anyway.



Re: DRM?

2018-01-16 Thread Leo Gaspard
On 01/16/2018 05:42 PM, Robert J. Hansen wrote:
>> The mechanism to prove you are the owner of a public key is pretty much
>> in place :-). A mechanism where you can have a signed statement saying
>> "on 2018-01-16, I allow my key to show up on keyservers"
> 
> It is theoretically and practically possible to have a keyserver that
> honors such requests, but what many people want is *enforcement*.  Not
> merely a voluntary system that's trivially circumventable, but some
> mechanism by which their public keys can be actively kept out of
> circulation.

Well, if such requests were honored, this would fix the OP's problem
(i.e. “how do I hide the fact I mistakenly associated two unrelated
UIDs on my key”, if I understood correctly), as well as requests
pertaining to the EU's “right to be forgotten” (modulo people who would
have lost their private key and still claim this right, but I guess the
extraordinary measures taken the last time it was invoked would still
be possible).

So that's at least a good part of the current problem solved, I think --
though obviously nothing close to the nightmare scenario or people
wanting to DRM their keys.

Also, there are flaws with this approach (for instance, after a private
key compromise, it would let the attacker prevent dissemination of the
revocation certificate) [1], but fixes like allowing the statement to
be “on 2018-04-01, please expose only the master key and its revocation
certificate(s) to clients” would likely handle this particular issue.

All I'm saying is that a system like this one is not a silver bullet
solution, but may handle a few of the current complaints against the SKS
network?


[1] It looks like Kristian has written more about this while I was
typing this mail, if I can guess from Peter's answer, though Kristian's
mail hasn't landed in my mailbox yet.



Re: a step in the right direction

2018-01-16 Thread Leo Gaspard
On 01/16/2018 09:20 AM, Robert J. Hansen wrote:
>> should not be viewed as "discussing a [...] nightmare scenario",
> 
> I am darkly amused at someone who has not done the research into what
> the nightmare scenario *is* telling me that it's not a nightmare scenario.
> 
> The nightmare scenario is malcontents realize the keyserver network is a
> multijurisdictional, redundant, distributed database from which data
> cannot be deleted... and decide this makes it an ideal way to distribute
> child porn.  The moment that happens, the keyserver network goes down
> hard as every keyserver operator everywhere gets exposed to massive
> criminal liability.
> 
> We've known about it for several years.  We've been thinking about how
> to counter it for several years.  It turns out that countering it is a
> *really hard job*.  If you make it possible to delete records from a
> keyserver, you open the door to all kinds of shenanigans that
> governments could force keyserver operators to do on their behalf.

I think this may be the reason why others are much more optimistic than
you about all this: I don't think we are considering this scenario,
only the much more restricted case of “I want to remove information
associated with my private key”. In that case there is no need for
trusted introducers who would be allowed to moderate data, or anything
like that: the owner of the key could just sign the deletion token with
the said key.

Handling network-wide censorship of published information is a much
harder scenario, as the network was designed to be censorship-resistant.
And that looks like a nice property to keep at the network level, in my
opinion, though in order to accommodate local jurisdictions keyserver
operators could choose not to store e.g. photo IDs or the like -- just
like (if I understand correctly) the case of someone suing to get his
key removed from keyservers led to the addition of an option for
keyserver operators to refuse to sync certain keys.

That said, I did read your “To implement this would require a completely
new keyserver implementation, […]”; this message was just to maybe cast
some light on why some people look much more optimistic about this than
you are.



Remove public key from keyserver (was: Re: Hide UID From Public Key Server By Poison Your Key?)

2018-01-15 Thread Leo Gaspard
On 01/15/2018 08:13 AM, Robert J. Hansen wrote:
>> Since you can never remove
>> anything from the public key server, You are
>> wondering if you can add something to it -- for
>> example, add another 100 of UIDs with other
>> people's real name and emails so people can not
>> find out which one is yours, and append another
>> 100 of digital signature so people get tired
>> before figure out which one is from valid user.
> 
> I rarely use language like this, but this time I think it's warranted:
> 
> This is a total dick move.  Don't do this.  You'll make yourself a lot
> of enemies, and if you pick the wrong real names and emails, some of
> those people are pretty damn good at figuring out what's going on.
> 
> Don't put real names and emails belonging to other people on your cert.
> It's *rude*.  If someone goes looking for "Robert J. Hansen
> " I want them to see one cert is newest and I want
> them to use that one.  If you go about putting my name and email address
> on your cert, I'm going to get cross.
> 
> Again: this is a total dick move.  Don't do this.

That said, it raises the interesting question of revocation of data on
keyservers (and the associated legal issues in operating keyservers, as
the operator is supposed to comply with requests to remove
personally-identifiable information from them).

I was just thinking, would it be possible to have a tag (a UID with
special meaning, like “please-remove...@srs-keyservers.net”?) whose
signature would be verified by the keyserver, and that would cause it
to drop everything from its storage apart from this tag? This way the
“please remove me” tag would just naturally propagate across
keyservers, and all up-to-date-enough keyservers would drop all the
data associated with the key except the tag and the master public key
(basically, the strict minimum needed to check said tag).
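
In hypothetical keyserver pseudo-code, the idea would be something like
the following -- the tag convention, key model and signature check are
all made up here for illustration:

```python
# Hypothetical sketch of a keyserver honoring a "please remove me" tag.
from dataclasses import dataclass, field

REMOVAL_TAG_PREFIX = "please-remove"

@dataclass
class StoredKey:
    primary_packet: bytes            # the master public key packet
    uids: list = field(default_factory=list)
    subkeys: list = field(default_factory=list)

def selfsig_valid(key: StoredKey, uid: str) -> bool:
    # Placeholder: really verify the self-signature binding `uid` to
    # the primary key, so only the key's owner can trigger the removal.
    return True

def apply_removal_tag(key: StoredKey) -> StoredKey:
    tag = next((u for u in key.uids
                if u.startswith(REMOVAL_TAG_PREFIX)), None)
    if tag is not None and selfsig_valid(key, tag):
        # Keep the strict minimum needed to keep checking the tag:
        key.uids, key.subkeys = [tag], []
    return key
```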

That said, I guess ideas like this have likely been discussed before?



Re: FAQ and GNU

2017-10-10 Thread Leo Gaspard
On 10/10/2017 08:23 PM, Daniel Kahn Gillmor wrote:
> On Tue 2017-10-10 19:46:28 +0200, Leo Gaspard wrote:
>> That said, I wonder whether the sentence with “all GNU/Linux distros
>> feature a suitable GnuPG tool” would make sense at all, given GnuPG is,
>> as pointed out by Mike, part of the GNU operating system, which would,
>> if I understand correctly, mean that as soon as the distribution
>> includes GNU it must include GnuPG? (I may easily be wrong in my
>> interpretation of “part of the GNU operating system”)
> 
> There's no "must" that a GNU system contain GnuPG.
> 
> [...]
> 
> So I think this FAQ is more correct if it's re-written to say
> "GNU/Linux" here and in the other place i mentioned.

Agreeing here.





Re: FAQ and GNU

2017-10-10 Thread Leo Gaspard
On 10/10/2017 06:45 PM, Daniel Kahn Gillmor wrote:
> (where is the FAQ maintained, btw?  how is one expected to submit
> patches?)

I based my quotes on https://dev.gnupg.org/source/gnupg-doc.git ,
directory web/faq, running `git grep Linux`.

> I suspect that many minimal Linux-based operating systems (particularly
> one that uses sbase instead of the GNU userland) will *not* feature a
> suitable GnuPG tool.  So the statement above is probably more accurate
> if you change it to GNU/Linux.
> 
> Do you have a list of sbase+Linux distros that we can look at for
> comparison?

Hmm, I was thinking sta.li would have gnupg, but it looks like it
doesn't come embedded. Thanks for noticing!

I would thus like to withdraw this statement, as well as the other one
you pointed out.

That said, I wonder whether the sentence with “all GNU/Linux distros
feature a suitable GnuPG tool” would make sense at all, given GnuPG is,
as pointed out by Mike, part of the GNU operating system, which would,
if I understand correctly, mean that as soon as the distribution
includes GNU it must include GnuPG? (I may easily be wrong in my
interpretation of “part of the GNU operating system”) If I'm correct and
this would be a pleonasm, then maybe replacing it with “most Linux
distros feature a suitable GnuPG tool, with the notable exception of
Android” would make more sense? Then again maybe GNU/Linux would be both
more precise and simpler indeed, despite the pleonasm.

Thanks for the comment!
Leo





Re: FAQ and GNU

2017-10-10 Thread Leo Gaspard
On 10/10/2017 03:13 PM, Mike Gerwitz wrote:
> On Mon, Oct 09, 2017 at 22:06:17 -0400, Robert J. Hansen wrote:
>> A request has been made that each instance of "Linux" in the FAQ be
>> replaced with "GNU/Linux".
> 
> GnuPG is part of the GNU operating system.  Anywhere "Linux" is used to
> describe the GNU/Linux operating system, "GNU/Linux" should be used.

The occurrences of “Linux” in the FAQ are in the following sentences,
according to a `git grep` in the FAQ directory:

> Except for a slight wording change, this DCO is identical to the one
used by the Linux kernel.

This sentence clearly deserves a Linux and not GNU/Linux... regardless
of whether GnuPG is part of the “GNU operating system” (sorry for the
quotes, it's the first time I hear this phrase) or not.

> - Linux is a trademark of Linus Torvalds.

Clearly Linux and not GNU/Linux once again.

> (all Linux distros feature a suitable GnuPG tool)

Do we really want to exclude distros based on the Linux kernel but not
on the GNU base utilities (using e.g. sbase [1] instead)? I'd say there
is no compelling reason to, so no reason to switch to GNU/Linux here.

> *** … for Linux?
>
> The bad news is there is no single, consistent way to install GnuPG on
> Linux systems.  The good news is that it’s usually installed by
> default, so nothing needs to be downloaded!

The same argument leads me to think there is no reason to switch to
GNU/Linux here again; distros without the GNU userspace don't have an
easier way to install GnuPG than distros with it, as far as I know.

>  … for Debian GNU/Linux or Ubuntu?

It's already GNU/Linux.

> ** … Linux or FreeBSD?
>
> [Follows a list of email clients compatible with non-{Windows,Mac}
> operating systems]

Do Thunderbird, Gnus, Mutt, Kontact, Evolution or Claws-Mail not work
on computers which have swapped the GNU userland for e.g. sbase? If
they don't, maybe it'd be good to add a note stating that they don't
work without GNU tools, but I don't see any reason to exclude
non-GNU-userspace-based Linux distributions from the list, especially
given how FreeBSD is included in there too.


Thus, I'm not in favor of any change to the current FAQ that would
replace a “Linux” with a “GNU/Linux”.

Cheers,
Leo


[1] https://core.suckless.org/sbase





Re: FAQ and GNU

2017-10-10 Thread Leo Gaspard
On 10/10/2017 05:55 PM, Mario Castelán Castro wrote:
> On 10/10/17 01:46, Robert J. Hansen wrote:
>> With respect to specific distros, we ought use the name the distro
>> prefers.  The Fedora Project releases Fedora, not Fedora GNU/Linux.  The
>> Debian guys release Debian GNU/Linux, not Debian Linux.  The people who
>> set up these distros have given their distros names, and it seems
>> appropriate to use the names properly.  It is as inappropriate to refer
>> to Debian Linux as it is to refer to Fedora GNU/Linux: in both cases
>> that's rejecting the community's right to name their distro what they wish.
> 
> To me it appears hypocritical that you are speaking of “respecting
> community rights” where the aforesaid communities (more precisely, the
> founding developers who are the ones that actually choose the name of
> the distribution, not the later community) have stepped over the right
> of the GNU project to be given proper credit.
> 
> Recall that the most important contribution of the GNU project is not
> the software packages, but starting the free software movement and
> developing the most important licenses. GNU/Linux distributions are only
> possible because of free software ideology, even though many such would
> hate to acknowledge this.

So we should call FreeBSD “GNU/FreeBSD” instead? Sorry, I could not resist.





Re: [Feature Request] Multiple level subkey

2017-09-10 Thread Leo Gaspard
(you forgot to Cc: the list, I'm Cc-ing it back as this doesn't seem
intentional to me)

On 09/10/2017 07:50 PM, lesto fante wrote:
>> Besides, there is no
> need to give the same masterkey to your bank and your smart fridge, as
> they will (likely?) not participate in the Web of Trust anyway
> 
> not the same masterkey, but the same identity. Very important for
> changing the key transparently.

By “masterkey” I meant “most privileged key” (that is what you call
“identity key”). I'll try to use your terminology henceforth.

> You are right, I don't need the same identity for the fridge and the
> bank.. until I want the fridge to buy the milk.
> Or, for a more realistic example, i want my bank key and amazon key to
> be different, but to point to the same identity.. make more sense now?
> Yes, i could do it with the current master key and sub-key, but with
> the lack of a "master-master" key, a compromised master key would be a
> hassle to manage (that remember, is on the user device.. probably the
> smartphone. not exactly the safest device, and something quite easy to
> lose or get stolen).

I still don't get why you would want amazon and your bank to see the
same identity key. The only point of an identity key is to accumulate
signatures from the WoT; here there is no need for the WoT, and you
could just tell amazon to log you in with one key and to pay with [some
private key whose corresponding public key you gave your bank].

>> Then, the company should not run a keyserver, but just sign the keys of
>> all workers, and check the signature is valid and not revoked on use.
> 
> if you sign the identity of the worker, how do you revoke your sign?

With OpenPGP's revocation signatures, for which GnuPG provides an
“easy” interface?

> AFAIK you should create a certificate for each worker and then sign
> it, and use that certificate to sign the worker identity, so you can
> revoke the worker certificate. That, to me, seems a bit more complex,
> but on the bright side can be implemented right now.

No, currently you'd just sign the worker's masterkey with the company's
masterkey, and when the worker no longer works there you'd revoke the
previous signature.

> A company may want to run their own server maybe because they don't
> want to expose all the certificate for all their worker. To me make a
> lot of sense, especially for sector like security or social care.

Indeed they may want to, but it is not (or maybe “should not be”?) a
critical piece of the infrastructure: current keyserver software is
most likely not as secure as the cryptography underlying signatures.

>> Do you mean not exposing your public identity key or your private
>> identity key?
> 
> private identity key. Your public identity is well known, the private
> key should be the most safe thing you have. Losing it is like losing
> your documents.
> Well, actually my end goal is to REPLACE documents (like passport)
> with this system. This also help you to understand why is so important
> to protect the identity, but still have a way to be able to issue it
> to minor services for everyday usage.

Well, you should never expose your private identity key to anyone, or
any other private key linked to you for that matter.

To come back to your example with amazon and your bank, there is
absolutely no need for the private key you give amazon to be linked to
your identity; it just needs to correspond to the public key you gave
your bank. You could even not use public-key crypto, and only give the
same 128-bit token to both sides, and it should be enough (though your
bank may insist on public-key cryptography so that they can have a
proof that such key issued such payment order).

If you don't want to access your bank's website, you could just give a
128-bit token signed with your signing key or whatever to amazon
(disclaimer: I'm writing this without thinking about all the security
implications right now, just throwing an idea out in the air).
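
For the shared-token variant, the proof-of-possession part is already
tiny -- a sketch of the principle only, not a hardened protocol (no
replay protection, no binding to the service, etc.):

```python
# Sketch of the shared-128-bit-token idea: prove possession of the
# token without sending it, via a challenge-response HMAC.
import hashlib
import hmac
import secrets

token = secrets.token_bytes(16)            # the 128-bit shared secret

def prove(token: bytes, challenge: bytes) -> bytes:
    return hmac.new(token, challenge, hashlib.sha256).digest()

challenge = secrets.token_bytes(32)        # sent by the verifier
response = prove(token, challenge)         # computed by the prover
assert hmac.compare_digest(response, prove(token, challenge))
```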

>> If you mean private identity key, then there is already no need to
>> expose any private part of any key when signing
> 
> you dont expose it *mathematically*, bu you expose it to the outside
> world, where you can lose it, got it stolen.. and then all your
> identity and related is compromised, and you have no easy or automated
> way to get back the ownership.
> Losing the SIGN key is scary, but as soon as you get home (or you
> alert your "trust" person, that just have the revoke key for the SIGN
> key, so it does not need to be *that* trusty), you will have your
> master key revoked, and as soon as you create a new SIGN key, you will
> *automatically* regain access to all services.

I think I no longer get what you call masterkey.

>> I think I sort of get what you are doing, but don't really get the
>> advantage compared to things already possible with OpenPGP (with C
>> subkeys), that is:
> 
> this is interesting,  i cant find much on how certification key work;
> but if they can be used to create and revoke other key in the 

Re: [Feature Request] Multiple level subkey

2017-09-10 Thread Leo Gaspard
On 09/10/2017 06:36 PM, lesto fante wrote:
> I am a bit confused by your "C key" terminology, i assume you are
> referring to what i call "master key", or level 2 key, that now I want
> to call SIGN KEY.

Oh yes sorry, I forgot to explain my terminology.

> Lets all agree on the terminology please. I propose this:
> 
> level 1: IDENTITY key - keep super safe. Paranoid level safe.
> 
> level 2: SIGN key - keep password protected on you main devices. Normal
> user level safe
> 
> level 3: APPLICATION KEY - can be clear and shared between multiple
> device. Quite unsafe, but convenient; completely transparent login! And
> still way safer than the classic "cookie to remember my login" approach

This is the terminology that would be used under your proposal, do I
understand correctly?

What I called C subkeys is based on the terminology for the three major
operations possible with keys under OpenPGP: Certify, Sign and Encrypt
(I seem to remember Authenticate also exists, although I never used it
myself).

Certify usually means “assert something about the owner of some other
key,” Sign means “assert I have written this content,” and Encrypt means
“make this content only accessible by someone.”

OpenPGP already has Sign and Encrypt (S and E) subkeys, but currently,
as far as I can remember, Certify (C) subkeys are hardly supported.

> Also i don't know what rfc4880bis ML is? is that some proposal somehow
> similar?

RFC4880bis ML is the place where the evolutions to RFC4880 (the RFC
describing OpenPGP) are usually discussed, although here is as good a
place as there for a first draft.

The “C subkeys” proposal would be to allow people to have a subkey with
the Certification capability (that is, allowed to perform this operation
on behalf of the masterkey).

Then, the proposal is close to what you proposed, except there is no
hierarchy of keys: you just have a masterkey, and S, E and C subkeys
directly signed by the masterkey. All these subkeys, when put together,
have all the power the masterkey has -- except the masterkey can revoke
them and issue new ones.

> Back to your email: You totally get it right!
> 
> Make the system CONVENIENT, keep your masterkey under you bed and forget
> about it.
> if you use that system on you bank, the bank does NOT care what
> "application key" (well, read later) you are using, as long as it is not
> revoked, as it is all the same identity.
> 
> We SHOULD think a way to integrate some permission in the key itself,
> like the name of the service it is supposed to be used; if someone stole
> a APPLICATION key can't use it to login to a different service. So we
> can imagine a situation where you create a APPLICATION key with
> permission to you use your bank account for up to 50€/week (of course,
> the limit for key is something implemented by the bank), and then give
> this key to you smart-fridge. Your fridge will not be able to sniff your
> facebook account, read your email or drain your ban account; and if
> something goes wrong, you KNOW what device is compromised. This could be
> as simple as the service giving you a ID to add somewhere IN the key,
> similar to how name and email can be set for a certificate right now. A
> bit more complex would be to avoid duplicate ID.

I fear you are going a bit too far into the future. Besides, there is no
need to give the same masterkey to your bank and your smart fridge, as
they will (likely?) not participate in the Web of Trust anyway.

> This permission thing could be also extended to SIGN KEY, eventually,
> but then I think could be just complexity without really added benefit,
> as in my idea normally there is only one master key. But that can be
> changed, just i have no idea of the categories to create.
> 
> This make SUPER convenient to revoke one or all you SIGN KEY and/or
> APPLICATION KEY without need for ANY other change; the reissuing process
> for the new key can be totally automated. Again I'm NOT taking into
> consideration how you share APPLICATION and SIGN key between your
> devices, that is a problem for another day discussion (would be
> extremely nice to have a standard way for any DEVICE to request a
> application key with SUGGESTED permission to the main device, but we
> have already too much on the fire right now)
> 
> What we NEED is a clear standard to publish public key and revoke, and
> we could simply use the existing key server. Think about a company, with
> is own key server that identify its worker, customer and such; you use
> you smartphone to clock-in and out your work, to start your (encrypted)
> work computer, sign and, encrypt and decrypt message, and be a step
> safer from scammer and social engineering.

Hmmm... The keyservers already exist and work quite well for public-key
publishing and revocation, so I don't get why we would need something
else? They're the de-facto standard.

Then, the company should not run a keyserver, but just sign the keys of
all workers, and check the signature is valid and not revoked on use.

Re: [Feature Request] Multiple level subkey

2017-09-10 Thread Leo Gaspard
On 09/10/2017 04:36 PM, Daniel Kahn Gillmor wrote:
>> My use case is simple; maintain my identity even if my master key is
>> compromised. To achieve that, I think about a multilevel subkey
>> system.
> 
> I'm not sure how the proposed multi-level system is an improvement over
> an offline primary key.  It's certainly more complicated, but complexity
> is a bug, not a feature.  can you explain why you think it's better?
> 
> with an offline primary key, you only put subkeys on any device that's
> used regularly.

I can think of at least one use case it covers in addition to an offline
masterkey (but that would also be covered by C subkeys): the ability to
sign others’ keys without using your masterkey. This would make it
possible not to expose the keysigning device to the untrusted
data/software/hardware that carries the to-be-signed key.

It would also make an offline masterkey much more convenient to use,
given one could just act as though it never existed (even for
keysigning), until the subkeys become compromised -- at which point the
recovery operation would be to 1) re-generate the subkeys, and 2)
re-sign all keys you had signed with your previous C subkey.

What do you think about this? (maybe I should just raise the issue on
rfc4880bis ML, but as the question arose here…)





Re: Is it possible to certify (sign) a key using a subkey?

2017-08-18 Thread Leo Gaspard
On 08/18/2017 06:33 PM, Peter Lebbing wrote:
>> In my own and other people's keyrings and in key servers.
> 
> The impact of you doing this on your own seems vanishingly small. And
> the ratio of disk space used by a public keyring versus everything else
> that is commonly on a computer isn't different. If I were looking for
> optimizations, I'd turn to processing time of a public keyring, not its
> size.

Just for the record, it seems to me there may be another reason for
separate certification subkeys, namely the security of the masterkey.

Having a C subkey would make it possible to keep the masterkey entirely
isolated and to use only a diode to export C subkeys to a “keysigning
machine”, whose compromise would not compromise the masterkey. Then, in
case of compromise of the keysigning machine, it'd be possible to
revoke the C subkey and create another one, then re-sign all the
previously signed keys with this new C subkey, all without losing the
signatures on the masterkey.

This is quite different from “airgapped computers” that use USB drives
to carry to-be-signed keys, as the USB stack itself (or the filesystem,
or GnuPG's certification operation) could be compromised; the most
obvious attack scenario is a BadUSB-style compromise of the drive's
firmware, making it act like a keyboard typing the commands required to
exfiltrate the masterkey.

Then, it's quite sad if C subkeys aren't widely supported, but I guess
that's another issue (and maybe it should be clearly spelled out in the
RFC whether they must be supported? especially with rfc4880bis in the
works, now could be a good time to choose)





Re: Should I be using gpg or gpg2?

2015-09-29 Thread Leo Gaspard
On 09/29/2015 06:04 PM, Robert J. Hansen wrote:
> But you never know when a George Dantzig will appear.  And that means I
> think your long-term confidence in RSA is misplaced.

Does that mean long-term confidence in elliptic curves would be better
placed?

Does ECC rely on a stronger mathematical basis, or is it just vulnerable
to another kind of George Dantzig?



Re: GPG's vulnerability to quantum cryptography

2014-07-07 Thread Leo Gaspard
On Sun, Jul 06, 2014 at 12:21:13PM -0400, Robert J. Hansen wrote:
 On 7/6/2014 3:36 AM, The Fuzzy Whirlpool Thunderstorm wrote:
  Using GPG encryption is still good, although it's vulnerable to
  quantum cryptodecryption.
 
 In point of fact, we don't know this.
 
 Theoretically, science-fiction level breakthroughs in quantum
 computation would break RSA.  But the problem with theory is some of the
 things that theory permits turn out to be impossible in reality.  For
 instance, there's nothing in the laws of physics that prohibit things
 from having negative mass, but we've never encountered negative-mass
 material anywhere: not in the lab, not in the world, not in deep space,
 not anywhere.

Wasn't there an experiment running, one or two years ago, trying to make
anti-electrons anti-gravitate? I don't remember having read any result,
though...

 It's good to be skeptical of quantum computation.  It's interesting to
 read up on, but be immensely skeptical of all predictions.

Weren't you the one who preached to assume the worst? It seems rather
reasonable to assume that somewhere in the future quantum computation
(or some other kind of huge advance in science) will break whatever
cipher we are currently using... after all, Vigenère-like ciphers are
almost ridiculous nowadays, while they were once state-of-the-art.



Trust and distrust [was: Re: Google releases beta OpenPGP code]

2014-06-08 Thread Leo Gaspard
On Sun, Jun 08, 2014 at 01:13:27PM -0400, t...@piratemail.se wrote:
 And personally, I do not trust google. Enough said in that regard. ;-)

Sorry to hijack this topic, but... Why would you trust the OpenPGP.js
developers?

At least, you can hold google accountable for its actions. You cannot
hold them accountable: perhaps they do not even physically exist, and
are just nameholders for a three-letter-agency project, willingly
introducing backdoors in this project. Maybe they just fixed the bugs
you reported because it made them look less conspicuous.

Such "maybe"s will bring us all very far away.

What's great about open source is that you do not at all have to trust the
maintainer of a project. You only have to trust the project -- and by this I
mean trusting that at least one developer would have noticed a flaw. I may
even distrust Werner, and yet use gpg -- if e.g. I trust another GnuPG
developer.

And even this trust is not strictly required: you can always inspect the
source code all by yourself.

Sure, this "trust the community" model is far from perfect, Heartbleed being
the latest proof of that. But it is better than "trust the maintainer", who
is in any case part of the community.

And what's great about google's project is that it is quite likely to be
heavily audited: finding a deliberately placed security flaw in google's
end-to-end library would mean a lot of prestige.

So, even if I trusted google less than the OpenPGP.js developers [and who
says these developers are not disguised google agents?], I would likely,
after a period during which security experts have had their time with this
new library, trust it more than OpenPGP.js.

Despite the fact that it might have a backdoor while the other does not. Because
the opposite is even more likely.

Cheers,

Leo



Re: GPG's vulnerability to brute force

2014-05-25 Thread Leo Gaspard
On Sat, May 17, 2014 at 10:51:40AM +0200, Peter Lebbing wrote:
 You can't object to scientific theories on the basis that you did not
 study them properly. It might have a bit of a Socratic feel to it, but
 it quite falls short of the real thing.

Just for the record: I do not feel like I ever objected to a scientific
theory on the basis that I did not study it properly. I merely object*ed* to
Robert's interpretation of such theories, stating that my objections might
be invalid due to my incomplete study of the underlying theories (which
turned out to be the case).

Thanks for the discussion,

Leo



Re: GPG's vulnerability to brute force [WAS: Re: GPG's vulnerability to quantum cryptography]

2014-05-16 Thread Leo Gaspard
First: I agree with everything skipped in the quotes.

On Wed, May 14, 2014 at 07:31:26PM -0400, Robert J. Hansen wrote:
 On 5/14/2014 6:11 PM, Leo Gaspard wrote:
  BTW: AFAICT, a nuclear warhead (depending on the warhead, ofc.) does 
  not release so much energy, it just releases it in a deadly way.
 
 A one-megaton nuke releases a *petajoule* of energy.  That's a lot.
 When people start using the phrase peta- to describe things, I
 suddenly become very interested in their Health & Safety compliance.
 This is a petawatt laser.  This is a petawatt reactor.  This is a
 petajoule of energy.  This is Peta Wilson.[1]

Well... A nuclear reactor produces 1 GW, and thus produces 1 PJ in 10^6 s,
that is approx. 11 days 14 hrs. Sure, you may be very interested in the
Health & Safety compliance of nuclear reactors, but...

  * You state the energy would be released (or did I misunderstand?). 
  Wikipedia states it is a minimum possible amount of energy required 
  to change one bit of information So no ecological catastrophe (not 
  counting nuclear waste, CO2, etc)
 
 You're beginning to make me a little irate here: the Wikipedia page
 answers this in the second sentence of its first paragraph.  Any
 logically irreversible manipulation of information ... must be
 accompanied by a corresponding entropy increase.
 
 Key phrase: Entropy increase.
 
 Layman's translation: Heat increase.
 
 The Landauer Bound gives not just a minimum amount of energy necessary
 to change a bit of information, but how much heat must be liberated by
 that computation.  And I repeat, this is in the second sentence of the
 first paragraph of the Wikipedia article...

Well... Currently, at the French equivalent of undergrad level (CPGE), we're
taught that entropy is a theoretical quantity that has no real-world meaning
-- thus creating no heat. Actually, its unit (J.K^{-1}) does seem to validate
this interpretation: contrary to e.g. enthalpy, it's not an energy. Perhaps
we are oversimplifying, or perhaps I completely misunderstood the teachers,
but if this is true there is no heat release. OTOH there would be heat
absorption through the need to move the entropy out of the system --
provided AES is not reversible (see below for my case against that point).

  information on each flipped bit. Actually, IIUC, flipping a bit is a
   reversible operation, and so the landauer principle does not apply.
 
 Look!  A bit of information:  ___
 
 That's what it was before.  Of course, it's now carrying the value '1'.
 So, tell me: you say bit flips are reversible, so what was the value
 before it was 1?  I promise, I generated these two bits with a fair coin
 (heads = 0, tails = 1).

Well... If the operation the bit just underwent was a bitflip (and, knowing
the bruteforcing circuit, it's possible to know that), the bit was a '0'.

I believe I must have misunderstood your challenge! (Or, it just came to my
mind: maybe I was unclear: by "bitflip" I did not mean setting a bit, but
rather setting its value to 1 - (old value).)
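
The distinction can be put in a few lines of Python: a flip in this
sense is its own inverse, while "set to 1" is not:

```python
# Flipping (XOR with 1) loses no information: applying it twice gives
# the old value back. "Set to 1" maps both 0 and 1 to 1, so it does
# lose one bit -- that is the kind of operation Landauer's bound covers.
def flip(b: int) -> int:
    return b ^ 1          # 1 - (old value), i.e. reversible

def set_one(b: int) -> int:
    return 1              # irreversible: the old value is gone

for b in (0, 1):
    assert flip(flip(b)) == b
assert set_one(0) == set_one(1)
```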

 Reversible means we can recover previous state without guessing.
 Current computing systems are not reversible.

I do not state that physically our processors are reversible. I do not even
state any processors might ever be, or that adiabatic computers might ever
exist.

I just state that the theoretical map going from the set of 128-bit keys to
the set of 128-bit cleartexts (with the 128-bit ciphertext fixed) is a
bijection (or so I hope -- unless many keys produce the same ciphertext from
the same cleartext, which would be an attack on AES and naturally ease
bruteforce).

As a consequence, I cannot see where a bit of information was lost, and thus
where Landauer's bound is supposed to apply. But maybe I am the one lost
here!

Thanks for your previous and hopefully future answers,

Leo



GPG's vulnerability to brute force [WAS: Re: GPG's vulnerability to quantum cryptography]

2014-05-14 Thread Leo Gaspard
On Wed, May 14, 2014 at 12:21:36PM -0400, Robert J. Hansen wrote:
  Since the well known agency from Baltimore uses its influence to have
  crypto standards coast close to the limit of the brute-forceable, 128
  bit AES will be insecure not too far in the future.
 
 No.
 
 https://www.gnupg.org/faq/gnupg-faq.html#brute_force

I unfortunately have to object to this FAQ article. (Please note I'm not
using any information beyond what Wikipedia provides -- and I may be wrong
in my understanding of it.)

First, the Margolus-Levitin limit: at most 6×10^33 ops·J^{-1}·s^{-1}.
So, dividing 2^128 by 6×10^33 gives me a bit less than 57000 J·s (assuming
testing an AES key is a single operation). So, that's less than 1 min with
1 kJ. Pretty affordable, I believe.

Then, Landauer's principle: an energy of k·T·ln 2 per bit.
Again, assuming testing an AES key is a single bit flip, as k is approx.
10^{-23}, this gives an overall energy (per kelvin) of
2^128 × 10^{-23} × ln 2 J·K^{-1}, which is approx. equal to 10^16 J·K^{-1}
(overestimated: the rounding up outweighs the underestimation of k).
According to Wikipedia still, the lowest temperature recorded on Earth is
10^{-10} K.
This makes for a total of 10^6 J, if the computation is done at that
temperature.
According to http://hypertextbook.com/facts/2009/VickieWu.shtml the human
body uses approx. 6 MJ (i.e. 6×10^6 J) per day.
As a consequence, the process would consume less than a day of a human
body's energy budget.
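
For reference, here is the arithmetic above redone with exact constants
(the rounding of k down to 10^{-23} in the text accounts for a factor of
roughly 3 on the final figure):

```python
# Redoing the brute-force energy arithmetic with exact constants.
from math import log

OPS = 2 ** 128               # keys to test, one "op" per key (assumed)

ml_limit = 6e33              # Margolus-Levitin: ops per joule per second
print(OPS / ml_limit)        # ~5.7e4 J*s: with 1 kJ of energy, <1 minute

k = 1.380649e-23             # Boltzmann constant, J/K
T = 1e-10                    # coldest temperature reached in a lab, K
print(OPS * k * T * log(2))  # ~3.3e5 J, well under the body's 6 MJ/day
```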

Granted, this is still far from possible: here I assumed that testing an AES
key is a single bit flip, and that the computation is entirely done at the
coldest temperature ever recorded in a laboratory. Anyway, the former is a
not-so-huge constant (i.e. less than 10^5, I'm almost sure of that), and
multiplying the results by this constant still yields an imaginably
reachable lower bound. And the latter has already been reached; despite my
believing no computation has been done at that temperature, it is still
possible in a foreseeable future.

So, despite bruteforcing being obviously impossible in this day and age, and
most likely impossible in the near future, it seems to me that the following
statement is exaggerated: "The results are profoundly silly: it’s enough to
boil the oceans and leave the planet as a charred, smoking ruin."

The impossibility of bruteforce, to me, lies with current physical
computation capabilities, more than with theoretical lower bounds, which
lie far below what current technology achieves.

Hoping I didn't miscompute,

Leo



Re: PGP/GPG does not work easily with web-mail.

2014-04-09 Thread Leo Gaspard
On Wed, Apr 09, 2014 at 11:37:52PM +0100, One Jsim wrote:
 PGP/GPG does not work easily with web-mail.
 
 Most email, today, is read and write using the browser
 
 POP ou IMAP mail is a rarity
 
 That is the problem
 
 Some text/link in this problem?
 
 José Simões

Well... I started to write a firefox addon, but never had enough time to
finish it. Perhaps later. If anyone wishes to get what I've done (that is, a
js-ctypes binding of GPGME, along with tests AFAICR), I can try to locate
the source code!

However, a major issue remains the encryption of HTML documents, which is,
AFAICT, not possible today (well, not automatically at least; of course gpg
can be used to sign HTML files), and which besides is not obviously secure:
what about white-on-white text and such? I don't doubt there are fixes for
these, and most of it isn't even an issue; I just remember Enigmail forbids
it, so I guess there are reasons.

Sorry for not helping you more,

Leo



Re: Using an RSA GnuPG key for RSA ?

2014-04-04 Thread Leo Gaspard
On Thu, Apr 03, 2014 at 09:56:18AM -0400, ved...@nym.hush.com wrote:
 On Wednesday, April 02, 2014 at 5:41 PM, Leo Gaspard ekl...@gmail.com 
 wrote:
 
 If you are not to use the key in gnupg, why make gnupg generate it 
 in the first
 place? Why not use the program with which you'll use the key to 
 generate it? 
 
 =
 
 Where in the post did you get the idea that I would not?
 
 I trust GnuPG's generation of keys, but prefer not to trust closed source 
 programs generating RSA keys.
 I would like to use my GnuPG RSA key, easily available on keyservers, for 
 other RSA functions.
 
 
 vedaal

(As you didn't reply to the list, I'm not cutting anything. Hope you didn't
mean it to be a private message, but it clearly didn't seem like one.)

Well... I inferred it from "use it (not in GnuPG, but in other systems using
RSA keys)" in your first message.

Anyway, as Sam puts it, you'd do better not to put your RSA key everywhere.

And... You say you do not trust closed-source programs for key generation,
but does that mean you trust them for key usage? Otherwise, you could just
as well throw your key into the dustbin.

What I could propose would be to:
 * Make a gpg key, master key, airgapped, etc.
 * On each system on which you mean to use cryptography, generate a keypair
   using the program with which you are going to use it (or possibly
   openssl, if the program does not generate keys).
 * Sign the public key of each keypair with your gpg key. As it is not a
   PGP key stricto sensu, sign the armored key as a plaintext message, if
   possible with a preceding comment explaining what it is to be used for.
 * Publish these signatures somewhere easily found.
 * If you want to, encrypt the private key with your mainkey and store it
   somewhere safe enough (it's encrypted, after all).

This way, each keypair gets the maximum security it can have: the security
of the application using the private key part. (Actually, if you choose to
keep an encrypted backup, you also need to keep the mainkey safe, but that's
supposed to be the most protected part of the whole setup, so...)
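
The signing step can be scripted, for instance with the `python-gnupg`
wrapper -- the file names, key ID and comment text below are all
placeholders, and using this particular wrapper is just one option:

```python
# One possible way to script "sign the armored key as a plaintext
# message with a preceding comment"; names below are placeholders.
import gnupg

gpg = gnupg.GPG()                      # default GnuPG home directory
armored = open("service-key.pub.asc").read()  # hypothetical key file
statement = "This key is only to be used for service X:\n\n" + armored
signed = gpg.sign(statement, keyid="0xDEADBEEF", clearsign=True)
open("service-key.pub.asc.sig", "w").write(str(signed))
```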

What do you think about it?

Leo



Re: Using an RSA GnuPG key for RSA ?

2014-04-04 Thread Leo Gaspard
On Fri, Apr 04, 2014 at 01:32:47PM -0400, ved...@nym.hush.com wrote:
 I trust them to encrypt to my public key, and was planning to work out
 a system where I could decrypt on my own without it going through
 them.
 (they could have my public key, and verify my RSA signature).
 
 [All this is in the theoretical planning stage ;-)  
 first I would need to be able to isolate my RSA part of my GnuPG key
 and see if it can be used with an open source simple RSA program
 offline.
 
 That was my original question.]
 vedaal

Well... As this seems to be undocumented (otherwise I guess someone else
would have answered you), I'm going to assume there is no such function
available in gnupg.

So, this (and the reasons explained by Sam) is why I'm trying to figure out
what you actually want to do, in order to perhaps propose another solution,
instead of merely telling you to write your own extractor.

So, if you forgive my bluntness... With what closed program are you trying to
interface? Why would you want to use your pgp keypair for this program, and not
a key generated for this use?



Re: Using an RSA GnuPG key for RSA ?

2014-04-02 Thread Leo Gaspard
On Wed, Apr 02, 2014 at 01:55:21PM -0400, ved...@nym.hush.com wrote:
 Is it possible to generate an RSA key in GnuPG, and then use it (not in 
 GnuPG, but in other systems using RSA keys), to encrypt and decrypt RSA 
 messages?
 
 If so, what portion of the GnuPG generated RSA key functions as a 'pure' RSA 
 key?
 (Is it isolatable by using --list-packets on the key?)
 
 TIA,
 
 vedaal

If you are not to use the key in gnupg, why make gnupg generate it in the first
place? Why not use the program with which you'll use the key to generate it? Or,
if the program does not offer this functionality, why not use openssl, which
provides this capability on purpose?

Were you to use the key both for gnupg and other systems, I would understand,
but doing things this way...?

Cheers,

Leo



Re: MUA automatically signs keys?

2014-01-30 Thread Leo Gaspard
On Thu, Jan 30, 2014 at 09:09:45PM +0000, MFPA wrote:
  The advantage you have here though is the web of trust.
  1 level 1 signature would probably be not enough, but
  5, 10, 100..?
 
 If the signatures are made automatically by email software without
 verifying identity, where is the web of trust? Lots of such signatures
 would tie the key to the email address but not to a person. Email
 addresses, just like phone numbers, may be re-used by a different
 person today to who used them last year.

Well... To this at least I can answer. Sure, it links a key to an email address.
Yet, more often than not one knows the email address of the intended recipient
(otherwise, how would he/she send the email?). So knowing that an email address
is associated with a key can be useful.

About emails reused by different persons... AFAICT most major email services
never re-issue the same email address twice. Which could be considered good
practice. If one worries about an email provider stealing the email addresses,
well... A signature on an email UID means "Yes, this key is used by the same
person as the email address." So signing it automatically would not conflict
with the meaning of the signature. Yet if the UID also includes a name, then it
should be signed only after appropriate verification of the owner.

Just my two cents,

Leo



Re: Non email addresses in UID

2014-01-28 Thread Leo Gaspard
On Fri, Jan 24, 2014 at 11:08:16PM +0000, Steve Jones wrote:
 [...]
 
 Finally there's the possibility of explicit verification, if someone
 sends me a challenge and I publish that challenge's signature on my
 blog then that verifies that I am in control of that private key and
 can publish to that blog.
 
 [...]

Wouldn't it be better to publish unencrypted (and unsigned) a challenge received
encrypted? Signing unknown data should be avoided, as no one knows whether that
data might someday acquire a real meaning one never intended.

Hope this message is not syntactically flawed to the point of being meaningless,

Leo



Re: Revocation certificates [was: time delay unlock private key.]

2014-01-24 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 04:38:19PM -0800, Robert J. Hansen wrote:
 Well... I don't know how you type
 
 With a nine-volt battery, a paperclip, and a USB cable that has only one end
 -- the other is bare wires.  You wouldn't believe how difficult it is to do
 the initial handshake, but once you've got it down you can easily tap out
 oh, three or four words a minute.  For speed, nothing else comes close.
 
 My father gets on my case for using the nine-volt battery.  In his day, they
 had a potato and a couple of wire leads plunged into it.  But really,
 technology marches on and we should all embrace battery technology.

Great laugh!

(of course, I meant how fast)

 passphrase would really have to try hard to guess what passphrase I am using.
 And even more to remember a seven-word sentence seen once.
 
 You are not the typical use case.  No one person is a typical use case.

Well... You are right, of course. Yet this does not answer my second point: if
the spouse is spying on you to get your passphrase and remember it, then love is
already gone, and you are subject to the usual hooker attack.

Yet I do see your point for revocation certificates here, I think.

Oh, just found another one in favor of revocation certificates: they can be
easily sent to keyservers from cybercafes without any special software
installed.

Thanks and cheers,

Leo



Re: Revocation certificates

2014-01-24 Thread Leo Gaspard
On Fri, Jan 24, 2014 at 07:47:15AM +0100, Werner Koch wrote:
 [...]
 
  the usefulness of revocation certificates, just the advice always popping
  out to generate a revocation certificate in any case, without thinking of
  whether it would be useful.
 
 Okay, that is a different thing.  I plan to change that with a notice
 saying which file has the edited revocation certificate.

OK, thanks! (For the remainder of the message as well; I just have nothing to
answer.)

Guess I got my answer, with every message combined.

Thanks all!

Leo



Revocation certificates [was: time delay unlock private key.]

2014-01-23 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 05:53:57PM +0000, nb.linux wrote:
 Hi Uwe,
 
 Johannes Zarl:
  So in short:
   - a delay won't help you
   - protect your private key so this won't happen
   - always use a strong passphrase
 and in addition: if you fear (or know) that your secret key was copied
 from your system, revoke it!
 To me, this is a very important feature of OpenPGP: _you_ can actually
 do something to reduce (not more, but also not less!) harm for yourself
 and others.
 And, you can be prepared for such an event (i.e. having created the
 revocation certificates in advance, stored them in a safe but accessible
 place, printed out on paper,...).

Actually, this is something I never understood. Why should people create a
revocation certificate and store it in a safe place, instead of backing up the
main key?

So long as the primary key is encrypted and the passphrase is strong, this
should not lead to any security risk. (Anyway, it's stored in a safe place.
And a revocation certificate misused is dangerous too, as it ruins a web of
trust.)

And doing so has the advantage that, in case of accidental erasure of the
private key (who has never accidentally deleted an important file?), the main
key can also be retrieved.

The primary key allows one to create a revocation certificate, not the other way
around. So, why store a safe revocation certificate?
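
To illustrate that asymmetry, a sketch of regenerating a revocation certificate
from a backed-up key (the key ID is a placeholder; gpg prompts interactively for
the passphrase and a revocation reason):

    import subprocess

    # Restore the backed-up secret key, then derive a fresh revocation
    # certificate from it -- the reverse direction is impossible.
    subprocess.run(["gpg", "--import", "secret-key-backup.asc"], check=True)
    subprocess.run(["gpg", "--output", "revoke.asc",
                    "--gen-revoke", "0x12345678"], check=True)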

Leo

PS: Please, do not tell me one might have forgotten his passphrase. In this case
there is no harm in shredding the secret key and waiting for the expiration
date, answering persons emailing you encrypted files that you lost your
passphrase. Anyway, in this case, you're screwed, and a revocation certificate
would be no better -- unless it was stored unencrypted, at the risk of having it
used when you do not want it to be.



Re: Revocation certificates [was: time delay unlock private key.]

2014-01-23 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 09:59:30PM +0100, Pete Stephenson wrote:
 [...]
 
 They would need to be trustworthy
 enough to not abuse the revocation certificate by revoking your
 certificate, but otherwise would not need to be given absolute trust
 that comes with having a copy of the private key. Same thing with
 keeping revocation certificates in a bank safe deposit box or some
 other location protected by a third-party -- if the box were
 compromised (say by the authorities with a court order), your private
 key would not be compromised.

Well, why not give them a copy of the encrypted key? So long as your passphrase
is strong enough (e.g. diceware for a 128-bit passphrase), it should not require
absolute trust in the backup holder.

And if you want to account for risks of mental injury serious enough to make you
forget your passphrase, well... our threat models are not the same: in mine, my
key will expire soon enough in this case, and no one will ever be able to
compromise my secret key, as no one on earth remembers the cryptographically
strong passphrase.

  PS: Please, do not tell me one might have forgotten his passphrase. In this
  case there is no harm in shredding the secret key and waiting for the
  expiration date, answering persons emailing you encrypted files that you
  lost your passphrase. Anyway, in this case, you're screwed, and a revocation
  certificate would be no better -- unless it was stored unencrypted, at the
  risk of having it used when you do not want it to be.
 
 Actually, that's a fairly reasonable scenario. Someone may have
 forgotten their passphrase for whatever reason (ranging from
 forgetfulness to head trauma) and would like to inform others that
 their key is no longer usable. Replying to people saying that their
 key is lost is essentially indistinguishable from a man-in-the-middle
 attack -- some people might simply resend the encrypted message in
 plaintext, defeating the purpose of encryption in the first place.

As you put it, this is essentially indistinguishable from a man-in-the-middle
attack, so anyone who resends the encrypted message unencrypted is not respecting
the protocol (or believes it is not important enough to be encrypted -- let's
remember a lot of people just send Christmas cards without envelopes).

If anything important has to be transmitted (and the sender refuses to send it
in cleartext), the sender will most likely send the message through someone else
who has physical contact with the recipient.

One might argue this protocol is imperfect, yet it is the best the sender
could achieve, revocation certificate or not.

 A revocation certificate allows one to revoke the certificate even
 without access to the private key, so one could reply with their
 now-revoked public key (or direct someone to the revoked key on the
 keyserver) and say Sorry, I lost the private key and had to revoke
 it. -- while this doesn't resolve the issue of people resending
 things in plaintext, it does permit someone to actually show that
 their key is, in fact, revoked.

Indeed. Yet the absence of an answer should be clear enough to let the sender
understand his message did not reach the recipient. Sure, a MitM could block
the outgoing message, but anyway the sender has no better option than to find
another way of sending his message: the recipient clearly is not receiving the
message.

In case the fact of knowing a key has been revoked is time-critical, well...
Knowledge of head traumas tends to spread quickly enough through circles of
acquaintances. (Sure, forgetfulness is a risk. Yet, forgetting a passphrase you
type so often must be quite hard past the first few weeks.)

And, what's more, such time-critical scenarios happen only with special needs,
which are AFAICT not usual.

 Also, not all keys have expiration dates. I, for one, tend not to set
 expiration dates on my primary keys, but instead rotate encryption and
 signing subkeys (which do have expiration dates) for day-to-day use.
 While I could put an expiration date on the primary and extend the
 date as needed, it's easier for me to just make revocation
 certificates and keep them stored off-site.

Well, you lose the dead man's switch. BTW, once your key is revoked, will it some
day be cleared out of the keyservers, or will it stay there forever?
As far as I can guess, keys with an expiration date must be purged a little
while after they expire, as there is no point in keeping them, while revoked
keys must be kept, as anyone might need to update them to retrieve the
revocation certificate.

Of course, I'm only discussing the case of normal people. There must be plenty
of cases for which a revocation certificate is really useful, yet the number of
tutorials in which I read "Store a backup of your private key and a revocation
certificate" just looks like nonsense to me: if one stores both, it should at
least be specified that they should be in two different locations, as storing
the two together is completely pointless, one being derivable from the other.

Re: Revocation certificates

2014-01-23 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 10:26:33PM +0100, Werner Koch wrote:
 On Thu, 23 Jan 2014 21:25, ekl...@gmail.com said:
 
  PS: Please, do not tell me one might have forgotten his passphrase. In this
  case there is no harm in shredding the secret key and waiting for the
  expiration
 
 Experience has shown that this is the most common reason why there are
 so many secret keys on the servers which are useless.  Further, an
 expiration data is not set by default and waiting a year until the key
 expired is not a good option.

Oh? I thought the most common reason was test keys, and tutorials which explain
step-by-step how to make a keypair and push it to a keyserver, without saying to
set an expiration date. I, unfortunately, have myself put a few test keys on
the keyservers (whose passphrases I no longer have) without expiration dates
before knowing they would never (?) be deleted, and am still remorseful about
it.

And keys with an expiration date are someday deleted, while keys without one --
even revoked -- never are, are they?

BTW, revocation certificates are not produced by default either. So, why not
advise people to set an expiration date, instead of counselling them to generate
a revocation certificate?

 Further, it is also common that a secret key is lost (disk crash - no
 backup, backup not readable or too old) or simply stolen.  This has the
 same effect as a forgotten passphrase.  In particular in the stolen key
 case, you want to immediately revoke it and not wait until you can
 restore the key from a backup stored at some safe place.

Well, my question is then: Why not restore the key immediately (having stored it
at the place you would have stored the revocation certificate), and revoke it
then?

 There are other rare scenarios, for example a high security key in a far
 away place, you are traveling and you want to immediately revoke the key
 for whatever reason.

These corner-case scenarios are ones I did not mean to discuss, sorry for not
having made them clear.

I'm also feeling I may have failed to make myself understood: I am not denying
the usefulness of revocation certificates, just the advice always popping out to
generate a revocation certificate in any case, without thinking of whether it
would be useful.

Cheers!

Leo



Re: Revocation certificates [was: time delay unlock private key.]

2014-01-23 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 01:27:58PM -0800, Robert J. Hansen wrote:
 [...]
 
 And yes, a strong passphrase is still the strongest bar against these
 backups being misused -- but unless you've got an eye-poppingly strong
 passphrase, your best bet is to rely on denying attackers access to the data
 as well as the passphrase.
 
 [...]

Well... Diceware generates 128-bit passphrases of ten words, which is not *that*
much. Yet it can be regarded as far too much. Well... a seven-word passphrase
provides 90 bits of security, and should not be so hard to remember. And
bruteforcing it should take quite a long time...
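
The arithmetic behind these figures, as a quick check (Diceware picks each word
uniformly from a list of 6^5 = 7776 words):

    import math

    bits_per_word = math.log2(7776)          # about 12.9 bits per word
    print(f"{10 * bits_per_word:.1f} bits")  # ~129.2 bits for ten words
    print(f"{7 * bits_per_word:.1f} bits")   # ~90.5 bits for seven words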

Sure, you would need to use a really good random number generator, yet you could
use /dev/random just as well as you would have for your randomly-generated
passphrase.

Yet, I agree I would not send my encrypted private key. But having your divorced
spouse bruteforce 90 bits of passphrase just to annoy you... seems quite an
unreasonable threat to me. And AFAICT even well-funded organizations are not yet
powerful enough to bruteforce a 90-bit passphrase with enough s2k iterations.

Cheers,

Leo



Re: Revocation certificates [was: time delay unlock private key.]

2014-01-23 Thread Leo Gaspard
On Thu, Jan 23, 2014 at 03:08:40PM -0800, Robert J. Hansen wrote:
 Yet, I agree I would not send my encrypted private key. But having your
 divorced spouse bruteforce 90 bits of passphrase just to annoy you... seems
 quite an unreasonable threat to me.
 
 It is.  That's why that's not the threat being defended against.
 
 The threat is against your spouse seeing you enter your passphrase.  It's
 very easy for roommates to discover each other's passwords and passphrases;
 sometimes it happens by accident.  Everyone knows not to enter a passphrase
 with a shoulder surfer around, but if you and your spouse are sitting on the
 couch with your laptops open and you receive an encrypted email, are you
 really going to tell her, Sorry, honey, I have to take this into the other
 room so I can enter my passphrase without worrying about you spotting it?
 
 So long as there's marital bliss, you're perfectly safe.  You just can't
 rely on that lasting forever.

Well... I don't know how you type, but someone looking at me who sees me type my
passphrase would really have to try hard to guess what passphrase I am using.
And even more to remember a seven-word sentence seen once.

BTW, I once ran a fun experiment: typing an eight-random-character password with
no protection at all, and asking the people behind me to remember it. The
password was displayed as I typed it, and left visible approx. two seconds more.
No one managed to see it and remember it. A few days later, I conducted the same
experiment with the same people and the same password, and no password was
successfully guessed. Sure, the information gathered would be enough to
bootstrap a successful bruteforce, but the experiment was a lot easier to
complete than peeping at and remembering a seven-word password.

So, if the spouse is doing it, then marital bliss has already come to an end,
and one should have noticed it.

Yet, being unmarried, I cannot say anything about such things.

So, within that threat model, revocation certificates are useful for sure.
Assuming one's spouse would first grab the secret key and remember the
passphrase before divorce.

Thanks for making that point!

Leo



Re: sign encrypted emails

2014-01-05 Thread Leo Gaspard
On Sat, Jan 04, 2014 at 10:28:26PM +0100, Johannes Zarl wrote:
 On Saturday 04 January 2014 16:09:51 Leo Gaspard wrote:
  On Fri, Jan 03, 2014 at 07:31:29PM -0500, Daniel Kahn Gillmor wrote:
   In your example, the fact that a message was encrypted makes the
   recipient treat it as though the sender had indicated something specific
   about the message because it was encrypted.  This is bad policy, since
   there is no indication that the sender encrypted the message themselves,
   or even knew that the message was encrypted.
  
  Which is exactly the reason for which Hauke proposed to sign the encrypted
  message in addition to signing the cleartext message, is it not?
 
 Wouldn't one have to encrypt the signed-encrypted-signed message again to 
 prevent an attacker from stripping away the outer signature? What would the 
 recipient then do with the simple signed-encrypted message?

Well, the idea would be that the receiving program would check there *is* an
additional signature, and refuse the message if not.

Nevertheless, adding a second layer of encryption would help, both in avoiding
this threat with fewer requirements on the receiving program, and in avoiding
the metadata-analysis and irrevocability threats. Fewer requirements, as the
receiving program merely has to run decrypt-and-check twice, without having to
check that it actually has two levels of signature, as any absence of the second
level would be detected by a failed second check. Avoiding metadata analysis, as
encrypting the second signature prevents an attacker from grabbing a message and
obtaining undeniable proof that Alice sent an encrypted message to Bob, even
without Bob's help.
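
A sketch of the resulting nesting, assuming gpg is installed and bob@example.org
stands for the recipient; gpg's combined --sign --encrypt puts the signature
inside the encryption layer, so running it twice yields exactly the
encrypt(sign(encrypt(sign(message)))) structure discussed above:

    import subprocess

    def sign_then_encrypt(infile, outfile, recipient):
        # The signature ends up inside the encryption layer.
        subprocess.run(["gpg", "--output", outfile, "--recipient", recipient,
                        "--sign", "--encrypt", infile], check=True)

    sign_then_encrypt("message.txt", "inner.gpg", "bob@example.org")
    sign_then_encrypt("inner.gpg", "outer.gpg", "bob@example.org")
    # The receiver runs "gpg --decrypt" twice, each pass verifying one
    # signature; a missing inner signature makes the second pass fail.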

  Sure, there might be other ways: add a message stating to which key the
  message is encrypted, etc. But this one has the advantage of requiring
  AFAICT no alteration to the standard, and of being easily automated, for
  humans are quite poor at remembering to always state to which key they
  encrypt.
  
  Anyway, wouldn't you react differently depending on whether a message was
  encrypted to your offline key or unencrypted?
 
 One should certainly not act differently depending on the encryption of a 
 message. Maybe with the one exception of timeliness: If a message is 
 encrypted, you'll probably be ok with me reading the mail when I'm at my home 
 computer. If a message is encrypted to my offline key, you'll be prepared to 
 wait for a month or so (many people have their offline-key in a safe deposit 
 box).
 
 Of course this opens way to subtle timing attacks (delaying reading a message 
 until it is no longer relevant), but these subtle attacks can be done using 
 simpler means (holding the message in transit).

Well... I, personally, would attach more importance (not more validity, just
importance, as in "listen to me very well" or whatever English people say to
others to get them to listen carefully) to a message encrypted to an offline
main key that might wait for a month than to a message sent in cleartext. For I
would assume the sender deemed his message important enough to make me go to my
safe deposit box so as to read it.

Of course, without encryption-checking, this assumption is wrong, and this is
emphasized in one of my previous messages on this thread, with the "We got to
talk tomorrow" taking on an importance for the receiver that is unexpected by
the sender, thus leading to a security flaw.



Re: sign encrypted emails

2014-01-04 Thread Leo Gaspard
On Fri, Jan 03, 2014 at 07:31:29PM -0500, Daniel Kahn Gillmor wrote:
 On 01/03/2014 06:56 PM, Leo Gaspard wrote:
  On Fri, Jan 03, 2014 at 12:50:47PM -0500, Daniel Kahn Gillmor wrote:
  On 01/03/2014 08:12 AM, Leo Gaspard wrote:
  So changing the encryption could break an opsec.
 
  If someone's opsec is based on the question of whether a message was
  encrypted or not, then they've probably got their cart before their
  horse too.
 
  opsec requirements should indicate whether you encrypt, not the other
  way around.
  
  Well... So, where is the flaw in my example? This example was designed so
  that, depending on the level of encryption (and so the importance of the
  safety of the message according to the sender), the message had different
  meanings.
 
 As you've noticed, the sender cannot verifiably communicate their intent
 by their choice of encryption key.  If the sender wants to communicate
 their intent in a way that the recipient can verify it, they'll need to
 sign something.
 
 In your example, the fact that a message was encrypted makes the
 recipient treat it as though the sender had indicated something specific
 about the message because it was encrypted.  This is bad policy, since
 there is no indication that the sender encrypted the message themselves,
 or even knew that the message was encrypted.

Which is exactly the reason for which Hauke proposed to sign the encrypted
message in addition to signing the cleartext message, is it not?

Sure, there might be other ways: add a message stating to which key the message
is encrypted, etc. But this one has the advantage of requiring AFAICT no
alteration to the standard, and of being easily automated, for humans are quite
poor at remembering to always state to which key they encrypt.

Anyway, wouldn't you react differently depending on whether a message was
encrypted to your offline key or unencrypted?

Cheers,

Leo



Re: sign encrypted emails

2014-01-03 Thread Leo Gaspard
On Fri, Jan 03, 2014 at 06:21:05AM -0500, Robert J. Hansen wrote:
 On 1/3/2014 4:57 AM, Hauke Laging wrote:
  Would you explain how that shall be avoided?
 
 I already did, in quite clear language.
 
 You are trying to solve a social problem (people don't have the
 background to think formally about trust issues) via technological
 means (if we just change the way we sign...).

I think the need for such a fix could also be highlighted in the following
example.

I sign the message "Got to talk tomorrow at dawn", then send it to Alice,
thinking about the cake for the birthday party, not important so not encrypting
it. Bob grabs the message, and sends it encrypted to Alice's highest-security
key. Alice then thinks it is a really important message, and the matters to
discuss are really important. She takes with her the top secret files we are
working together on. Bob, knowing the place and date of the meeting, then comes
and steals the top secret files.

So changing the encryption could break an opsec.

I'm not saying it would be useful every day. But some use cases seem to require
it. However, I'm not saying this feature should be included by default, as a fix
would be easy (call gpg twice), and I can think of few use cases.

BTW, is a timestamp included in the signature? If not, it could lead to similar
issues.

Cheers,

Leo



Re: sign encrypted emails

2014-01-03 Thread Leo Gaspard
On Fri, Jan 03, 2014 at 12:50:47PM -0500, Daniel Kahn Gillmor wrote:
 On 01/03/2014 08:12 AM, Leo Gaspard wrote:
  So changing the encryption could break an opsec.

 If someone's opsec is based on the question of whether a message was
 encrypted or not, then they've probably got their cart before their
 horse too.

 opsec requirements should indicate whether you encrypt, not the other
 way around.

Well... So, where is the flaw in my example? This example was designed so that,
depending on the level of encryption (and so the importance of the safety of
the message according to the sender), the message had different meanings.

Sorry, I can't yet see where I went wrong.



Re: Sharing/Storing a private key

2013-12-14 Thread Leo Gaspard
On Fri, Dec 13, 2013 at 12:12:12PM +0100, Mindiell wrote:
 Hello,
 
 I'm using GPG regularly and did want to save my private key.
 
 [...]
 
 I found ssss (http://point-at-infinity.org/ssss/) too, but it wasn't
 really usable because it has too many limitations IMHO.
 
 So I did it myself: ShareIt (https://gitorious.org/shareit)

 [...]
 
 It is using Shamir's Secret Sharing Algorithm
 (http://en.wikipedia.org/wiki/Shamir%27s_Secret_Sharing).
 
 [...]

 Any comments and/or critiques are more than welcome, especially on security
  issues.

AFAIK, ssss *is* an implementation of SSS. So, why would you write a new
version?

I must say I didn't look at the source, as I did not see the point at first.

So, this is a warning about security issues: something you made yourself is
likely to be unsafe. A tested implementation exists.

Maybe there really is a point in writing it, but I can't see it. Maybe if you
explained what the limitations of ssss are...?
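
For reference, the core of Shamir's scheme fits in a few lines; this toy sketch
over GF(2^127 - 1) is for illustration only, not a vetted implementation:

    import random

    P = 2**127 - 1  # a Mersenne prime; shares live in GF(P)

    def split(secret, n, k):
        # Random polynomial of degree k-1 with constant term = secret.
        rng = random.SystemRandom()
        coeffs = [secret] + [rng.randrange(P) for _ in range(k - 1)]
        f = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def combine(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for xi, yi in shares:
            num = den = 1
            for xj, _ in shares:
                if xj != xi:
                    num = num * -xj % P
                    den = den * (xi - xj) % P
            secret = (secret + yi * num * pow(den, P - 2, P)) % P
        return secret

    shares = split(123456789, n=5, k=3)
    assert combine(shares[:3]) == 123456789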

HTH,

Leo



Re: Renewing expiring key - done correctly?

2013-12-04 Thread Leo Gaspard
On Tue, Dec 03, 2013 at 07:26:09PM -0500, Robert J. Hansen wrote:
 On 12/3/2013 6:59 PM, Hauke Laging wrote:
  It may be possible to prevent someone from seeing the revocation
  certificate. Certificate distribution is a lot less secure than the
  keys themselves. But you cannot trick someone into using an expired
  key.

 Of course you can.  Reset their computer's clock.  You don't even have
 to compromise their computer in order to do it: compromising whatever
 NTP server they're contacting is enough.

AFAIK by default ntpd dismisses changes when NTP time is off by more than
15 min from the RTC. One would need a special flag to force it to update the
clock in this case (at least with the ntpd I used).

So you could only delay the expiration date by 15 min... how useful is that?



Re: article about Air Gapped OpenPGP Key

2013-11-19 Thread Leo Gaspard
On Tue, Nov 19, 2013 at 09:06:18PM +0100, Johan Wevers wrote:
 On 19-11-2013 7:07, Robert J. Hansen wrote:
  Even then, scrubbing data is usually a sign you've misunderstood the
  problem you're trying to solve.  If you're concerned about sensitive
  data lurking on your hard drive the solution isn't to scrub the drive,
  it's to use an encrypted filesystem.
 
 That depends on your threat model. If you fear juridical problems (say,
 for example, some encrypted mails have been intercepted by the police
 but they can't decrypt them), destroying the key will prevent you from
 having to hand it over. In some jurisdictions this may be seen as
 contempt of court, and even be punishable, but in most EU countries
 you're safe when you do this.

Especially knowing in most EU countries judges are not allowed to force you to
hand over your secret key, only to decrypt specific messages for them. (Don't
remember where I read that.)



Re: article about Air Gapped OpenPGP Key

2013-11-19 Thread Leo Gaspard
On Tue, Nov 19, 2013 at 02:50:20PM -0800, Robert J. Hansen wrote:
 That depends on your threat model. If you fear juridical problems (say,
 for example, some encrypted mails have been intercepted by the police
 but they can't decrypt them), destroying the key will prevent you from
 having to hand it over. In some jurisdictions this may be seen as
 contempt of court, and even be punishable, but in most EU countries
 you're safe when you do this.
 
  Especially knowing in most EU countries judges are not allowed to force you
  to hand over your secret key, only to decrypt specific messages for them.
  (Don't remember where I read that.)
 
 Most encrypted drive software doesn't actually work the way people seem to
 think they work.  The drive is encrypted with a random nonce.
 [...]

Actually, I was answering the encrypted-mails part. Thanks anyway.

 I cannot think of a single use case for scrubbing plaintext storage devices.
 In every use case I can come up with, the user would be better served by
 using an encrypted storage device.  That doesn't mean no such use case
 exists, mind you -- just that I can't think of one.

Well... I can see one: the user used a plaintext storage device without
thinking about it, then realizes he needs an encrypted device, and scrubs
his hard drive once the encrypted drive is set up with the necessary
information.

Another one would be (paranoid) fear about the long, long term: who knows
whether some three-letter agency would not steal your computer, and store its
hard drive content until decryption is available (say, 10 years from now, being
quite optimistic?). So scrubbing the already-encrypted data would help ensure
the data is never recovered.

Maybe scrubbing a specific file, without needing to rewrite entire blocks,
block-based encryption being AFAICT the most frequent way of encrypting complete
hard drives.

That's all I can figure out.

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-07 Thread Leo Gaspard
On Thu, Nov 07, 2013 at 11:48:07AM +0100, Peter Lebbing wrote:
 On 06/11/13 23:28, Leo Gaspard wrote:
  But mostly because signing is an attestation of your belief someone is who 
  (s)he is. Thus, if you believe someone is who the UID states (s)he is as
  much as if you met him/her in person and followed the whole verification
  process, I would not mind your exporting signatures of the key.
 
 I get the feeling you're partly responding to my adamant statements earlier,
 but you're confusing the situation I was responding to.

Well... The answer to your previous message was in my first two paragraphs. The
rest of my answer, to which you replied, was mostly thinking over some debate
that arose earlier, and whose authors I do not remember. Anyway, I think you
answered the most important part of my last message.

 I think you're saying: Person X tells me their key is K1. I blindly trust
 person X, and I know for a fact that person X was the one who told me K1 is
 his key.
 That is, you were in the same room, or you recognised their voice on the
 telephone, or something similar. This is acceptable to many people as a
 verification.
 
 But this is not the situation I was talking about. It's this:
 
 Person X (having key K1) has signed key K2, asserting that it is held by Y.
 Since you blindly trust X, you can assign him full (or hell, ultimate if you
 prefer) ownertrust, and key K2 is valid for you. You don't need to sign K2
 anymore, because it is already valid since you expressed your trust to GnuPG,
 and GnuPG uses it to validate that it belongs to Y.
 
 Now, what Stan Tobias appeared to want, is sign key K2 himself, probably to
 express to others in the Web of Trust that he believes K2 to be valid. But
 this doesn't add any additional verification of key validity to the Web of
 Trust, it's noise. Because anyone else can look at the signature made by X,
 and decide: "I trust X fully as well." They assign full trust to X, and K2
 becomes valid.

Except they do not have to know X, nor that he makes perfectly reasonable
decisions in signing keys.

And I believe it's not noise. Let's make an example in the real world:
 * I would entrust X with my life
 * X would entrust Y with his life, without my knowing it
 * Thus, if I actually entrusted X with my life, why should I be frightened if X
   asked Y to take care of me? Provided, of course, X told me he was letting Y
   take care of me. After all, I would entrust X with my life, so I should just
   agree to any act he believes is good for me.
(That's what I called blind trust. Somewhat more than full trust, I believe.)

 Let's get back to ownertrust: in the Web of Trust, ownertrust is an expression
 of how well you think other people verify identities before they sign a key.
 If you sign key K2 based on X's signature, you haven't verified Y's identity.
 You've probably verified X's identity, but not Y's. So you shouldn't sign K2.

So, is a signature a matter of belief in the validity of the key or of actual
work to verify the key?

 You might believe Y when he or she walks up to you and says: "my name is Y
 and K2 is my key." But that is not what happened; X said: "K2 is Y's key." Y
 didn't say anything to you, let alone that you verified it was actually Y
 talking. That's the absolutely necessary part of verification: you believe
 that it was actually Y that told you K2 is theirs. Just believing K2 is Y's
 key is not verification; it's key validity.

 I'll give an example.

 In the Web of Trust, key validity is a thing that can gradually build up
 until it passes a certain point where we say: "I have so much proof that it
 appears to be valid, that I conclude it's, within reason, valid." This is why
 you have "completes needed", "marginals needed", and "max cert depth". The
 latter says: once we pass a certain depth, my proof of identity becomes so
 indirect I don't wish to trust that information anymore. I will paint a
 picture with the default settings, completes 1, marginals 3, max depth 5.

If I understood correctly, the depth parameter you are talking about is useless,
except in case there are trust signatures. And you agreed with me for them to be
taken out of the equation.

 Suppose A has signed B. There are three people C, D and E, who have full
 trust in A. They do what I'm arguing against: they sign key B as well, based
 on their trust of A.

 Now I come along. I actually have key A valid as well, but quite indirectly:
 it is at level 4. I know A, but ownertrust is very personal. I think A does
 an okay job of verifying identities, but not to the rigorous level I
 personally demand. I work with pretty sensitive stuff, and my standards are
 high (I'm painting a picture here, not describing reality). So I assign him
 marginal ownertrust. Now what I would expect, is that I need some more
 signatures, and B will become valid at level 5, the level where I have
 configured GnuPG to say: okay, this is deep enough, I will not take [...]

Re: trust your corporation for keyowner identification?

2013-11-07 Thread Leo Gaspard
On Thu, Nov 07, 2013 at 07:21:28PM +0100, Peter Lebbing wrote:
 On 2013-11-07 17:09, Leo Gaspard wrote:
 If I understood correctly, the depth parameter you are talking about
 is useless, except in case there are trust signatures. And you agreed with
 me for them to be taken out of the equation.
 
 Of course it's not useless. You seem to misunderstand the Web of Trust.
 
 I'll give an example.
 
 I know and trust the people A, B, C, D and E. A has signed B, B has signed
 C, C has signed D, D has signed E, and E has signed F. I meet up with A,
 verify their identity, and sign their key. I assign ownertrust to A, B, C, D
 and E. Et voilà, the keys A, B, C, D and E are all valid, without me needing
 to meet up with my other friends to verify their key details. A is at level
 1, B at 2, C at 3, D at 4, and E at 5. Unfortunately, F won't get valid
 because it is at level 6.
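
Peter's chain can be written down as a toy calculation; a simplified sketch
assuming full ownertrust everywhere and ignoring marginals, so a key is valid
as soon as a valid, trusted signer at depth below 5 has signed it:

    sigs = {"me": ["A"], "A": ["B"], "B": ["C"], "C": ["D"],
            "D": ["E"], "E": ["F"]}  # signer -> keys signed
    MAX_CERT_DEPTH = 5

    level = {"me": 0}
    frontier = ["me"]
    while frontier:
        nxt = []
        for signer in frontier:
            if level[signer] >= MAX_CERT_DEPTH:
                continue  # too deep: signatures no longer count
            for key in sigs.get(signer, []):
                if key not in level:
                    level[key] = level[signer] + 1
                    nxt.append(key)
        frontier = nxt

    print(level)  # A..E valid at levels 1..5; F absent: it would be level 6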

Indeed, I never thought someone would assign ownertrust without verifying the
key. Please accept my apologies.

However, I still believe that, under the condition that any ownertrusted key has
been verified (which, I assumed, was commonplace, but I was apparently wrong),
the depth parameter is useless.

 Now suppose C signs F as well. F is now at level 4, so it becomes valid.
 However, I don't trust F, so even if F now signs G, G won't become valid.
 
 Signatures indicate verification, not trust or belief. Trust is in your
 trust database or in trust signatures, but the latter are not commonly used.
 Belief is expressed in validity calculated from your trust database and
 signatures. I don't know if you can choose to disagree with GnuPG, that is,
 if you don't believe a key is valid even though GnuPG calculated that it is.

I'm sorry, I think I gave too much importance to your earlier statement
("Signing is to be an attestation to the validity of the key."), incorrectly
deducing from it that you should sign whenever you believe a key is correct as
much as if you had met in person.

 I could get back to all the other points you raise, but I think it's a waste
 of time when you have reasoned from the standpoint that to get a key to be
 valid, you need to sign it, and that is how it looks to me.
 
 It's not much of a Web when you don't have any depth... it's more of two
 intertwined strands then ;).

I think this time, you gave too much importance to some of my sentences. Or
maybe I was too bad at making myself understood.

Anyway, I meant I should sign a key whenever I believe the key to be valid as
much as if I had met with the keyowner. Which, of course, does not equate to
merely believing a key is valid. Indeed, in the WoT, one is rarely sure of the
quality of signatures. (Indeed, I believe(d) full ownertrust must be quite
rare, for that same reason; but I am probably wrong.)

And, now that I know assigning ownertrust to not-personally-checked keys is
relatively common, I know I should not sign keys based on other people's
verification.

However, to come back to the initial problem, I still believe the key change
problem (i.e. the owner of K1 switches to K2) does not require re-verifying
ownership etc. (BTW, isn't this also why transition statements, like
https://we.riseup.net/assets/77263/key%20transition, were written?)

But I still wonder how one should deal with key duplication (i.e. the owner of
K1 now has a second key K2)...

 HTH,
 
 Peter.
 
 PS: My ownertrust for E is useless for now, because he/she is at level 5.
 However, if I get a shorter path to him or her later, it will become useful
 then.

Anyway, thanks for your detailed explanations about the WoT!

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-07 Thread Leo Gaspard
On Thu, Nov 07, 2013 at 01:40:22PM -0500, Daniel Kahn Gillmor wrote:
 On 11/07/2013 11:09 AM, Leo Gaspard wrote:
 Except they do not have to know X, nor that he makes perfectly reasonable
 decisions in signing keys.
 
  And I believe it's not noise. Let's make an example in the real world:
    * I would entrust X with my life
    * X would entrust Y with his life, without my knowing it
    * Thus, if I actually entrusted X with my life, why should I be frightened
      if X asked Y to take care of me? Provided, of course, X told me he was
      letting Y take care of me. After all, I would entrust X with my life, so
      I should just agree to any act he believes is good for me.
  (That's what I called blind trust. Somewhat more than full trust, I believe.)
 
 if we're talking about gpg's concept of ownertrust, please do not muddy
 the waters with "entrust X with my life"?  gpg's ownertrust is much more
 narrow than that: it says "I am willing to rely on OpenPGP certifications
 made by the holder of this key."
 
 "entrust with my life" is not simply a superset of all other trust.  I have
 friends who would take care of me if i was deathly ill.  I would place my
 life in their hands.  But they have never thought about how to do rigorous
 cryptographic identity certification, and I would not rely on their OpenPGP
 certifications.

Indeed, I thought of this case after having sent my email. Anyway, by blind
trust, I did mean a superset of all trusts related to keysigning.

 Let's get back to ownertrust: in the Web of Trust, ownertrust is an
 expression of how well you think other people verify identities before they
 sign a key. If you sign key K2 based on X's signature, you haven't verified
 Y's identity. You've probably verified X's identity, but not Y's. So you
 shouldn't sign K2.
 
 So, is a signature a matter of belief in the validity of the key or of actual
 work to verify the key?
 
 An OpenPGP certification says "I believe that Key X belongs to the person
 identified by User ID U."  Most people would not want to make that statement
 publicly without having thought about it and convinced themselves somehow
 that it is true.  What it takes to convince each person may well vary, which
 is why we assign different ownertrust to different people.  When making a
 public assertion like an OpenPGP certification, it is also probably
 reasonable to ask what the parties involved (or the rest of the world) gains
 from making that statement. Just because you believe a statement to be true
 doesn't mean you need to make it publicly, with strong cryptographic
 assurances, and it may have bad consequences.
 
 Also, consider that certifications are not necessarily forever.   If Alice
 relies solely on Carol's certification to believe that key X belongs to Bob,
 and Alice then certifies (Bob,X), what does Alice do if Carol revokes her
 certification?  If Alice doesn't pay attention and revoke her own
 certification, then she is announcing as fact to the world something that
 she should no longer believe to be true (assuming that she was relying only
 on Carol's certification for that belief). This sounds like an untenable
 maintenance situation I personally would rather avoid, which is why i do not
 make public certifications based solely on other people's certifications.

Indeed. I just backed off in my answer to Peter, having understood why it was
not needed. However, I believe that for the initial problem (i.e. key change),
the information provided by a signed message accompanied by a UID on the other
key is significant enough, and moreover definite, so I would not be bothered
signing such a new key (of course, also revoking the signature on the old key).

 If I understood correctly, the depth parameter you are talking about is
 useless, except in case there are trust signatures. And you agreed with me
 for them to be taken out of the equation.
 
 The depth parameter is useful even without trust signatures.  Peter Lebbing's
 response upthread describes the scenario.

Indeed. Thanks for your answer, clarifying once again what signatures mean! (I
know, I'm slow to understand, but I think I'm OK now.)

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-07 Thread Leo Gaspard
On Thu, Nov 07, 2013 at 08:10:11PM +0100, Leo Gaspard wrote:
 I'm sorry, I think I gave too much importance to your earlier statement
 ("Signing is to be an attestation to the validity of the key.") [...]

Sorry again, just noticed it actually wasn't your statement, but Paul's!

So, double mistake...



Re: Signing keys on a low-entropy system

2013-11-07 Thread Leo Gaspard
(Failed again to reply to the list. I really ought to replace this shortcut...)

On Fri, Nov 08, 2013 at 12:11:38AM +0100, Johannes Zarl wrote:
 Hi,

 I'm currently thinking about using a raspberry pi as a non-networked stand-
 alone system for signing keys. Since I haven't heard anything to the contrary,
 I'm pretty sure that entropy is relatively scarce on the pi.

I heard haveged is quite good at gathering entropy from anywhere it can
(processor cycles, etc.)

 How is GnuPG affected by such a low-entropy system? Will operations just take
 a bit longer, or can this affect the quality/security of generated keys or
 signatures?

 I heard that low entropy or a bad entropy source is generall less of a problem
 for RSA. Is this true? Does this affect me in practice?

In theory, if /dev/random is configured to allow only random-enough data to
pass, it should just mean operations take longer. However, I am not absolutely
sure of this -- but I know that in theory /dev/random ensures some minimum
entropy, thus sometimes blocking reads.
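
A small sketch of that blocking behaviour (on Linux, where reads from
/dev/random may stall until the kernel's entropy estimate is high enough,
while /dev/urandom never blocks):

    import os

    fd = os.open("/dev/random", os.O_RDONLY)
    data = os.read(fd, 32)  # may block until enough entropy is available
    os.close(fd)
    print(len(data), "bytes read from the kernel entropy pool")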

Cheers & HTH,

Leo



Re: trust your corporation for keyowner identification?

2013-11-06 Thread Leo Gaspard
(Sorry, failed again to reply to the list, so you probably have this message
twice again.)

On Tue, Nov 05, 2013 at 05:32:38PM -0800, Paul R. Ramer wrote:
 On Tuesday 5 November 2013 at 11:03:19 PM, in
 mid:52797937.5090...@gmail.com, Paul R. Ramer wrote:
 
  But if you sign it with an exportable
  signature, you are saying to others that you have
  verified the key.
 
 In the absence of a published keysigning policy, isn't that an
 assumption?
 
 Signing is to be an attestation to the validity of the key. [...]

Well, thus my reasoning (last message) allows me to prove that I can have the
same level of confidence in Key 2 as in Key 1, even though I have not done
again all the steps of verification.

Thus, signing being an attestation of the validity of the key (I assume you
meant of the confidence in the validity of the key), why should one sign Key 1
and not Key 2?

For the same reason, signing (and exporting signatures) based on people I
blindly trust is not an issue to me. (I know, I just released the troll.)
Because if I blindly trust these persons, I believe with absolute certainty that
the person is who (s)he says (s)he is. And so I can announce this certainty by
signing the key. (I use the term "blindly" to mean even more than the technical
"ultimately", as the latter could be expressed using trust signatures. Just
really blind trust, as when you would let them decide your fate, knowing they
could be better off by sending you to hell.) Of course, if I sign the key only
because it is validated through technical means, not by hand-checking for a
signature from a blindly trusted owner, I would never sign that other key.

The fact that others could get just the same effect by twisting their WoT
parameters is not an issue to me. Firstly, because there are few trust
signatures (according to best practices I read, which said trust signatures are
mainly made for closed-system environments), so the WoT rarely expands beyond
one signature by someone you know. But mostly because signing is an attestation of
your belief someone is who (s)he is. Thus, if you believe someone is who the UID
states (s)he is as much as if you met him/her in person and followed the whole
verification process, I would not mind your exporting signatures of the key.

And saying that it allows the blindly trusted person to force you to see a key
as validated through three persons you marginally trust means nothing to me.
Indeed, these three persons are all asserting they believe with certainty
that the key owner is who (s)he says (s)he is. That they all used the same
information source is just common practice.

Indeed, how do you check an identity?
 * Name: Passport. Any government could make a passport as it wants, not even
   speaking of forgery. Thus everyone you know who signed some UID probably
   based their verification work on a single passport.
 * Comment: Depends on the comment. For "CEO of company X", it is probably
   based on public archives. Them referring to a person by his/her name, any
   forged passport also means a forged name.
 * Email: Probably a mere exchange of emails. Thus, anyone doing MitM could
   intercept the exchange and reply so as to make you validate the key, and
   even without MitM, the email provider could do so as well.
Every time, the certainty of the UID element is heavily dependent on others'
work. Thus, why should we refuse to base our work on others' signatures?
(*assuming* you believe in the UID validity as much as you would have done using
full verification)

I just found a counter-example: the case where the message (signed by Key 1)
telling that the owner of Key 1 is the owner of Key 2 is signed by a subkey,
which might have been compromised. However, I assumed such a message would only
be sent signed with the master key, as it must be totally relied upon. Thus,
anyone able to forge such a message would be able to forge any message using
the master key, and especially to add new encryption subkeys... Thus, such a
scenario is not a threat IMHO.

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-05 Thread Leo Gaspard
On Tue, Nov 05, 2013 at 12:40:11AM -0800, Paul R. Ramer wrote:
 I don't know how I can explain it any better than I have. I think you are 
 confusing assertion with verification.  Unless you can differentiate between 
 the two in this case, I don't think you will see what I am talking about.
 
 [...]
 
 I guess all I can say is that one should have a key signing policy to let 
 others know how he verifies keys.
 
 There. I said it all over again, just differently (and a whole lot more).

OK, I think I understood your point. (That is, assertion is not as strong as
verification.)

However, I think in this case (assuming there are no more UIDs on Key 2 than on
Key 1), assertions are sufficient, *because* there are two assertions, one in
each direction.

I mean:
 * Owner of Key 1 says (s)he is owner of Key 2 (through a signed message
   telling you so)
 * Owner of Key 2 says (s)he is owner of Key 1 (through a signed UID on Key 2)

So, except in case of collusion between owners of Keys 1 and 2, I believe there
is no way one can be wrong in signing Key 2 (of course, if Key 1 is signed).

IIUC, your point is that verification would enable one to avoid collusion, as it
is the only flaw I can see in this verification scheme. Except collusion cannot
be avoided in any way, AFAIK.

If that is not your point, could you exhibit a scenario in which there is a
signed UID on Key 2, a signed statement from Key 1's owner saying he owns Key 2,
and Key 2 not being usable by Key 1's owner? (Of course, excepting collusion,
which as stated above cannot be avoided.)

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-04 Thread Leo Gaspard
On Mon, Nov 04, 2013 at 01:44:51PM -0800, Paul R. Ramer wrote:
 MFPA expires2...@ymail.com wrote:
 Why do we need to establish they can also sign? Isn't it enough to
 demonstrate they control the email address and can decrypt, by signing
 one UID at a time and sending that signed copy of the key in an
 encrypted email to the address in that UID?
 
 You are right.  Decryption is sufficient to demonstrate control of the 
 private key, because if he can decrypt, he can also sign.  What I said, 
 decrypt and sign, was redundant.

Well... I still do not understand why decryption is sufficient to demonstrate
control of the private key while adding a UID is not (note I'm talking about
signed UIDs, not unsigned ones, of course).
Sorry.

Cheers,

Leo



Re: trust your corporation for keyowner identification?

2013-11-02 Thread Leo Gaspard
(Sorry, I once again sent the message only to you and not to the list -- I
really need to get used to mailing lists, sorry!)

On Sat, Nov 02, 2013 at 07:08:15PM -0700, Paul R. Ramer wrote:
 On 11/02/2013 02:25 PM, Leo Gaspard wrote:
  Isn't the presence of a UID sufficient for this matter?

 No, it is not.  Here is why.  When you verify a key to sign you are
 verifying the following:

 1) For each UID, that the name is correct and that the purported owner
 has control of the email in that UID (possibly also verifying the
 comment if it contains something such as CEO ABC Corporation).
 2) That the purported owner has control of the key and can decrypt and
 sign messages.

 [...]

Well...
 1) Checked by the other key's message. Because a signed (K1) message from
    Alice, saying she has access to K2, means any UID on K2 named Alice is as
    right as the equivalent UID on K1. So the UIDs are correct.
 2) Checked by the presence of the UID. Because, to add a UID, one must have
    control of the secret key, and thus be able to decrypt / sign messages with
    it. And, as stated in (1), the UIDs are valid. So Alice, who added the UIDs,
    must have access to the secret key.

The only case I could find of (2) being invalid would be if Alice herself tried
to trick you into signing a key bearing her name but used by Bob. Except it
turns out that she could just as well have the key for the duration of the key
exchange, and then pass it to Bob.

Where am I wrong?

Cheers,

Leo



Re: The symmetric ciphers

2013-10-31 Thread Leo Gaspard
 The reason why the cryptanalytic community looked into whether DES forms a
 group is because the 56-bit keyspace was too short and we critically needed
 a way to compose DES into a stronger algorithm.  That's not the case with
 AES.

Disclaimer: I am not a mathematician, only a student in mathematics. I did not
learn mathematics in the English language, but have tried to check on Wikipedia
that the vocabulary I am about to use is correct in English; please pardon me
if I make a mistake. Anything I have not checked should be redefined. And this
text is not intended for people with no insight into basic group theory.


Definitions:
 * [1;n] will be the set of all integers from 1 to n, ends included.
 * M is the set of possible messages.
 * C is the set of possible ciphertexts.
 * F(M, C) is the set of encryption functions (key included), that take a
   message in M as input and yield a ciphertext in C as output. IOW, it is the
   set of bijections from M to C.

Assumption: F(M, C) is a group for \circ, the composition, as any encrypted
message ought to be decipherable. (Well, not really a group, as the inverse
bijection would be in F(C, M), but I will write that it is a group for ease of
expression. Correcting this would only add useless text, so feel free to
do it in your mind if you prefer.)


First, I'll assume that, when you say ROT is a group, you mean that
(n -> ROTn) is a group morphism between (Z/26Z, +) and (F(M, C), \circ).


Let n be a positive integer.

So, now, let's assume K = [1; 2^n] is a group for some law *. Let's assume that
(k -> AES-n(k)) is a group morphism between (K, *) and (F(M, C), \circ).

In my opinion (and a bit more than that), it changes nothing to the question.
Indeed, composing two (or more) AES-n with independently randomly-chosen keys is
at least as strong as one AES-n with a randomly-chosen key, which, IIRC, was the
heart of the matter.


As a proof, let's take k1 and k2, two independently chosen random keys in K.
Then, AES-n(k1) \circ AES-n(k2) = AES-n(k1 * k2).

Now, let's prove k1 * k2 is a randomly chosen key in K. First, (x -> k1 * x)
is a bijection. So, if x is chosen randomly, then so is k1 * x. And k2 is chosen
randomly (independently from k1, which is quite important here), so k1 * k2 is a
randomly chosen key in K.

Proof of the first statement:
Let a, b be two keys in K. Then k1 * a = k1 * b implies a = b by mere
multiplication by k1^{-1}.
So (x -> k1 * x) is an injection from K to K, and K is a finite set, so
(x -> k1 * x) is a bijection on K.
Another way of seeing this would be to exhibit the inverse: (x -> k1^{-1} * x).
I know this is a well-known result, but I preferred to prove it again, just in
case.
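
(A small illustrative Python check of this, using Z/8Z under addition as a
stand-in for (K, *) -- the argument only uses the group axioms, so any group
would do: applying a fixed key to a uniform random key leaves it uniform.)

    import random
    from collections import Counter

    n, k1 = 8, 5                     # toy group Z/8Z, arbitrary fixed key k1
    counts = Counter((k1 + random.randrange(n)) % n
                     for _ in range(80000))
    print(counts)                    # all 8 values appear ~10000 times each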


So, whether AES-n is a group morphism or not does not matter for the question,
which was about finding a combined algorithm at least as strong as the
strongest of its components.

And DES was checked for group-like behavior because the objective was not to
create an algorithm at least as strong as the strongest component, but to create
an algorithm as strong as the sum of all components, which is substantially
harder.

BTW, the example about ROT still fits the proof: remember ROT0 could be
selected by a random key with probability 1/26. You can check that ROTn \circ
ROTm (i.e. ROT(n + m)) yields ROT0 with probability 1/26 when n and m are both
chosen uniformly. (There are exactly 26 pairs (n, m) with n + m equal to 0
modulo 26, out of the 26^2 possible pairs.)
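
(For the unconvinced, this is checked by brute enumeration in Python:)

    # Count the pairs (n, m) whose composition ROTn \circ ROTm is ROT0
    pairs = [(n, m) for n in range(26) for m in range(26) if (n + m) % 26 == 0]
    print(len(pairs), 26 ** 2, len(pairs) / 26 ** 2)   # 26 676 0.03846... = 1/26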


As a conclusion, IMHO (and without proof here, just gut feeling, even though the
start of a proof was given by Philipp earlier), stacking two algorithms with
unrelated randomly chosen keys makes an algorithm at least as strong as the
stronger of the two, at the cost of transmitting a longer key and spending more
time on encryption / decryption, which, admittedly, might be quite an issue.


Hoping I did not make too many mistakes, both in mathematics and in the English
language,

Leo

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: Recommended key size for life long key

2013-09-08 Thread Leo Gaspard
On Sun, Sep 08, 2013 at 03:15:24PM -0400, Avi wrote:
 As must I. Robert has one of the clearest modes of exposition from
 which I have ever been fortunate to benefit.

I have to agree on this point.

The issue is that I disagree with him on his stance: in my opinion, having a
schedule stating when keylengths will become insecure is useless; we just have
to be able to predict that longer keys will most likely be at least as hard to
crack.

And this means that, as long as the drawbacks associated with the use of the key
are borne by the key owner only (as the tables show, encrypt and verify
operations take almost unchanged time), recommending 10kbit RSA keys is no
issue, and can only improve overall security, at most to the detriment of the
key owner's usability.

And each key owner might choose whatever keylength they feel best suits them,
according to their usability / security needs; as long as these choices do not
impede other keyholders' usability or security.

BTW, the statement "[Dan Boneh] proved that breaking RSA is not equivalent to
factoring" is wrong: he did not prove that breaking RSA is easier than
factoring numbers; only that a whole class of ways of proving that breaking RSA
is as hard as factoring numbers is ineffective, thereby reducing the search
space for potential valid ways of proving the conjecture. Hence the title of
the article: "Breaking RSA *may* not be equivalent to factoring".
Please pardon me if I misunderstood the English used in the abstract.

Oh, and... Please correct me if I am mistaken, but I believe the best we can do
at the moment, even with a quantum computer, is Shor's algorithm, taking time
O(log^3 n). Thus, going from 2k keys to 10k keys makes it approx. 125 times
harder to break. Sure, not as comfortable a margin as what we have today, but if
the constant is large enough (which, AFAIK, we do not know yet), it might keep a
practical attack infeasible for a few more years.
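
(The 125 figure is just the cube of the length ratio; a quick Python check,
assuming only the O(log^3 n) term matters and ignoring the constant:)

    def shor_cost(bits):
        # Shor's algorithm takes O(log^3 n) steps; log n ~ key length in bits
        return bits ** 3

    print(shor_cost(10240) / shor_cost(2048))   # 125.0: 5x the bits, 125x the work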

So, back to the initial question, I do not understand why the article should be
judged so harshly. No, to me the article is not about predicting 87 years ahead
of us. To me, the article is about stating that using 10k RSA keys is not as bad
as one might think at first. The gist of the article is, to me, precisely this
set of tables.

And the first part is, still in my opinion, only there to explain why 10k RSA
keys were chosen for the experiment: according to our current estimates, they
might potentially resist until 2100, assuming no major breakthrough is made
until then in the cryptanalysis field. You might notice that the article takes
such precautions too.

So... I find the article interesting. I would not have thought everyday use of
a 10k key would have so few drawbacks. And, finally, I want to recall that
signing keys need not be the same as certifying keys, thus allowing us to pay
the signature-time penalty only for certifications, and to use normal keys the
rest of the time; thereby getting the best of both worlds, by being able to
upgrade signing keys to stronger ones without losing one's WoT. The only
remaining drawback is when others certify our master key.
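
(For what it's worth, a rough sketch of how adding a separate signing subkey
looks with GnuPG -- the exact menus vary between versions, and YOURKEYID is of
course a placeholder:)

    $ gpg --edit-key YOURKEYID
    gpg> addkey     # pick a signing-capable subkey type, e.g. RSA (sign only)
    gpg> save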

Cheers,

Leo

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: understanding GnuPG --clearsign option

2013-08-12 Thread Leo Gaspard
On Mon, Aug 12, 2013 at 11:40:35AM +0300, Martin T wrote:
 Hi,
 
 one can sign the message with the --clearsign option, which adds an
 ASCII-armored (Radix-64 encoded) PGP signature at the end of the text.
 This PGP signature contains the UID of the signer, timestamp and key
 ID. However, two questions:
 
 1) Where is the UID of the signer, timestamp of the signature and
 signer key-ID stored? If I execute gpg2 --verify file.asc, then I'm
 able to see the UID of the signer, timestamp and signer key-ID, but if
 I decode the Radix-64/base64 data back to binary(base64 -d) and use
 hexdump -C to analyze this data, I do not see the UID, timestamp or
 signer key-ID.
 
 2) What exactly is this PGP signature? Is it a SHA1 hash of the
 message which is encrypted with my private key and then ASCII armored?

According to http://openpgp.org/technical/, the OpenPGP standard is RFC 4880.

So, as your question is quite technical, you should be able to find your answer
here: http://www.ietf.org/rfc/rfc4880.txt
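
(One concrete pointer, though: GnuPG can dump the packet structure of the file
for you. Something like the command below should show the signature packet,
which carries the signer's key ID and the creation timestamp -- the UID itself
is not stored in the signature; gpg looks it up in your keyring from that
key ID.)

    gpg --list-packets file.asc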

Sorry for not being able to help you more!

Leo

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: Clarifying the GnuPG License

2013-06-13 Thread Leo Gaspard
On Wed, Jun 12, 2013 at 11:49:39AM +0200, Nils Faerber wrote:
 IANAL but from my understanding:
 1. by invocation of the commandline commands: Yes
 2. invocation of GnuPG exe: Yes
 3. Linking, dynamically or statically, against a GnuPG DLL, presumed
 that it is licensed under GPL: No

IANAL either, but I wonder whether hard-coding the GPG program name and
arguments in your binary would not be enough for your program to be considered
linked to the GPG executable.
This would mean the program would be bound by the GPL terms.
But, again, this is only a supposition, and you should get proper legal advice.

Cheers,

Leo

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users


Re: gpg for anonymous users - Alternative to the web of trust?

2013-03-27 Thread Leo Gaspard
Well... IMHO you did all you had to / could do, if you want to keep
confidentiality: publishing your public key in association with your name on
several websites. Now, just hope no covert agency will try to impersonate you
before a lot of people verify and sign your public key.

On Tue, Mar 26, 2013 at 11:38:23PM +, adrelanos wrote:
 Yes, I agree, it's pretty much impossible to distinguish myself from a
 nation-state's covert agency. Hence, I only asked how to claim a pseudonym.
 
 David Chadwick:
  Its pretty much impossible to distinguish a nation-state's covert agency
  personnel who are masquerading as someone else from the real someone
  else. In the UK we have recently had examples of undercover agents
  infiltrating animal rights groups or similar as activists, forming deep
  emotional relationships with female members, moving in with them, having
  children with them, and then years later, after the group has been
  smashed, disappearing from the scene. One such lady victim saw the
  picture of a policeman years later (I think in a newspaper) and
  recognised him as the father of her child, which is when the scam was
  blown open. So in short, these agencies do not find it difficult to do
  anything that they need or want to do
  
  regards
  
  David
  
  On 26/03/2013 17:36, Johnicholas Hines wrote:
  The question is how to distinguish yourself from a nation-state's covert
  agency purporting to be an individual interested in anonymity; you need
  to do something that the agency would find difficult to do.
 
  Getting your name and key into difficult-to-corrupt archives will start
  a timer - eventually you can point to the archives as evidence that you
  are not a newcomer. Even an agency would find it difficult to change
  history.
 
  Spending money or effort forces a covert agency to also spend money or
  effort to replicate your behavior. For example, if you sent someone a
  bitcoin, they would have to spend some dollars to establish themselves
  as comparably credible. Unfortunately, they have deep pockets. Effort
  might be preferable to money, since it leaves more ways that a covert
  agency might make a mistake, behaving in some characteristic way (e.g.
  some sort of automatic authorship attribution software might become
  available that revealed them to be a team rather than an individual).
  Steady effort at releasing patches over a decade might be moderately
  credible.
 
  Johnicholas
 
 
 
  ___
  Gnupg-users mailing list
  Gnupg-users@gnupg.org
  http://lists.gnupg.org/mailman/listinfo/gnupg-users
 
  
 
 
 ___
 Gnupg-users mailing list
 Gnupg-users@gnupg.org
 http://lists.gnupg.org/mailman/listinfo/gnupg-users

___
Gnupg-users mailing list
Gnupg-users@gnupg.org
http://lists.gnupg.org/mailman/listinfo/gnupg-users