On Fri, 24 Feb 2012 11:19, [email protected] said:

> And *if* (big if) there isn't an acceptable worst-case overhead for a
> compression algorithm, there is probably a cut-off in GnuPG, or it would

No, there is none.  As a proper Unix tool, gpg works fine in a pipeline
and thus cannot roll back a large amount of data to implement such a
cut-off.
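
For illustration, this is the kind of streaming invocation that rules
out such a roll-back (the key ID, host, and filenames here are made up):

  tar cf - /srv/data | gpg --encrypt -r 0x12345678 | \
      ssh backuphost 'cat > data.tar.gpg'

gpg sees the data exactly once as it flows through the pipe; by the time
a worst-case compressed size became apparent, most of the output would
already have been written downstream.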

> become a DoS attack vector: get someone to encrypt a specially crafted
> file that will fill his filesystem when the compression algorithm is run

There is an optional cut-off for decompression:

  @item --max-output @code{n}

  This option sets a limit on the number of bytes that will be generated
  when processing a file. Since OpenPGP supports various levels of
  compression, it is possible that the plaintext of a given message may be
  significantly larger than the original OpenPGP message. While GnuPG
  works properly with such messages, there is often a desire to set a
  maximum file size that will be generated before processing is forced to
  stop by the OS limits. Defaults to 0, which means "no limit".
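
  For example, to cap the plaintext written during decryption at 100 MiB
  (the filename is only illustrative):

    gpg --max-output 104857600 --decrypt suspicious.gpg > suspicious.txt

  Once the limit is reached, gpg stops processing instead of filling the
  filesystem.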
  
Shalom-Salam,

   Werner

-- 
Thoughts are free.  Exceptions are regulated by a federal law.


