Re: More problems with hash functions

2004-08-25 Thread Daniel Carosone
My immediate (and not yet further considered) reaction to the
description of Joux' method was that it might be defeated by something
as simple as adding a block counter to the input of each compression
step (sketched below).
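
For concreteness, a minimal sketch of that idea (my illustration, not
a vetted construction: SHA1 here merely stands in for a generic
compression function, and the 64-byte block size is an assumption):

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/*
 * Hypothetical sketch: Merkle-Damgard chaining with the block counter
 * mixed into every compression call, so that identical message blocks
 * at different positions no longer produce identical state updates.
 * The caller supplies the initial state and any padding.
 */
void
hash_with_counter(const uint8_t *msg, uint64_t nblocks, uint8_t state[20])
{
    uint8_t buf[20 + 64 + 8];
    uint64_t i;

    for (i = 0; i < nblocks; i++) {
        memcpy(buf, state, 20);              /* chaining value H   */
        memcpy(buf + 20, msg + i * 64, 64);  /* message block M_i  */
        memcpy(buf + 20 + 64, &i, 8);        /* block counter i    */
        SHA1(buf, sizeof(buf), state);       /* H = f(H, M_i, i)   */
    }
}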

In any case, I see it as a form of dictionary attack, and wonder
whether the same kinds of techniques wouldn't help.

--
Dan.




Hashes, splints, and PRNGs

2004-09-01 Thread Daniel Carosone
I'm really enjoying the current discussion about hash constructions
and splints for current algorithms.  I will make one observation in
that discussion, which is that the proposal for a Hnew (2n -> n) seems
a little beyond the scope of a field splint that can be done using
existing tools and implementations, and so I prefer some of the other
proposals that don't require this new element.

The discussion also gives me the excuse to ask a question I've been
wanting to pose to this list for a long time, regarding the techniques
used by the NetBSD /dev/random pseudo-device.

Although it was designed and used for different purposes, the
discussion here in the past few days has similarities to that
construction.  I hope
that by raising it, I can both add something to the discussion, and
also get feedback from this esteemed company on its general worthiness
for its original purpose.

I've attached the source code for the core part of the accumulator and
output function, which is reasonably well commented. (I hope; the
comments are largely mine, the code is very largely not).  

The general construction is an entropy pool (equivalent to the state
passed from block to block in the diagrams we've been using to talk
about Joux' attack) into which data (input blocks of 32-bit words) are
mixed.  The first observation here is that the state path between
blocks is very much wider than the blocksize of either input or
output, as desired.

The input mixing is done by modelling the state array as 32 parallel
LFSR PRNGs, one per bit-plane, in combination with a rotation of the
input and output around each of the LFSRs (so that none of them runs
for many cycles and becomes vulnerable to the well-known limitations
of LFSRs run too long).
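
As a rough illustration of the shape of that mixing step (a simplified
sketch only - the attached rndpool.c is authoritative, and the tap
offsets and rotation schedule below are invented):

#include <stdint.h>

#define POOLWORDS 128   /* 4096-bit pool, the default configuration */

static uint32_t pool[POOLWORDS];
static unsigned cursor, rot;

/*
 * Simplified sketch of the input mixing: each 32-bit word is rotated
 * by a varying amount and XORed into the pool along with several tap
 * words, clocking 32 LFSRs (one per bit-plane) in parallel.
 */
static void
mix_word(uint32_t w)
{
    w = (w << rot) | (w >> ((32 - rot) & 31));  /* rotate input  */
    pool[cursor] ^= w ^ pool[(cursor + 1) % POOLWORDS]
        ^ pool[(cursor + 25) % POOLWORDS]       /* illustrative LFSR    */
        ^ pool[(cursor + 99) % POOLWORDS];      /* feedback tap offsets */
    cursor = (cursor + 1) % POOLWORDS;  /* walk input around the pool */
    rot = (rot + 7) & 31;               /* so no one LFSR runs long   */
}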

In the normal usage of this device, the input data is timing
information from various device drivers (disks, keyboards, etc). It
can also include user-supplied data written to /dev/random, and so
collision-resistance does become at least potentially an issue.

Output is done by taking a SHA-1 hash of the whole state array, and
then folding it in half (XOR) before giving it to the user. (The
unfolded SHA-1 result is also mixed back into the pool as above, so
that successive calls produce different results as a PRNG).  Again,
this is reminiscent of using an n-bit hash to claim only n/2 bits of
security, though obviously this usage originated out of different
concerns.
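
In outline, and continuing the sketch above (OpenSSL's SHA1() keeps
the example self-contained), the output step looks roughly like:

#include <stdint.h>
#include <string.h>
#include <openssl/sha.h>

/*
 * Sketch of the output function: hash the whole pool, fold the digest
 * in half by XOR before handing it to the caller, and mix the unfolded
 * digest back into the pool so successive calls differ.
 */
static void
pool_output(uint8_t out[SHA_DIGEST_LENGTH / 2])
{
    uint8_t digest[SHA_DIGEST_LENGTH];
    uint32_t w;
    int i;

    SHA1((const uint8_t *)pool, sizeof(pool), digest);
    for (i = 0; i < SHA_DIGEST_LENGTH / 2; i++)     /* fold in half */
        out[i] = digest[i] ^ digest[i + SHA_DIGEST_LENGTH / 2];
    for (i = 0; i < SHA_DIGEST_LENGTH / 4; i++) {   /* mix back in  */
        memcpy(&w, digest + 4 * i, 4);
        mix_word(w);
    }
    memset(digest, 0, sizeof(digest));  /* don't leave the unfolded
                                           digest lying around */
}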

So, looking at this in the context of present discussions, and
pretending this construct is used purely as a (hybrid) hash function,
it has the following characteristics:

 - 32-bit input block
 - 80-bit output block
 - wide path (4096 bits in default configuration) between blocks
 - Hnew = TT(H2(H1(M)))
 - probably only effective for messages longer than some multiple of
   the internal state size

Joux' arguments certainly apply to this construction (which was in any
case designed to preserve entropy rather than for collision resistance
against wholly-known different inputs).  If collisions can be found
for H1, then H2 adds nothing (its purpose is whitening of the internal
state, in the original usage).

Anyway, some questions for the floor:

 - How much relevance and similarity is there between PRNG style
   constructs used this way and hash compression functions?  Normally,
   a PRNG is designed to produce many bits from few, and a hash the
   reverse. However, using the PRNG this way tries to produce many
   state bits to carry forward, rather than output bits.

 - If contemplating a construct such as this as a hash-like function,
   any hope of Joux-resistance relies on the large internal state. How
   could this be made stronger? One thought would be to clock the
   output function (a SHA-1 of the pool) and mix that back in every N
   inputs (sketched below, after these questions) - or is this just
   needless work?

 - Is the whole thing pointless (for this purpose) and does it just
   amount to a SHA-1 of TT(M) if M is wholly-known?

 - Finally (and of most interest to me practically), can anyone see a
   problem with the construct for its intended usage as a PRNG?  In
   particular, can anyone see a way to feed/flood it with known input
   such that the output becomes weaker (if M is partially known or
   partially chosen)?
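
To make the second question concrete, the clock-the-output idea might
look like the fragment below (continuing the earlier sketches; the
interval of 64 is an arbitrary assumed value for N):

#define RESEED_INTERVAL 64  /* the "N" above; value assumed */

/*
 * Sketch: every N input words, clock the output function and let it
 * mix the unfolded SHA-1 of the pool back in, so an attacker flooding
 * known input must also drive the state through the hash.
 */
static void
pool_add(uint32_t w)
{
    static unsigned count;
    uint8_t discard[SHA_DIGEST_LENGTH / 2];

    mix_word(w);
    if (++count % RESEED_INTERVAL == 0)
        pool_output(discard);   /* feedback happens inside */
}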

--
Dan.

PS: if looking at the code, please ignore the entropy count parts;
they work with an estimator that implements the (non-)blocking nature
of /dev/(u)random, but this is known to be highly arbitrary and
(hopefully) overly conservative.
/*  $NetBSD: rndpool.c,v 1.17 2002/11/10 03:29:00 thorpej Exp $*/

/*-
 * Copyright (c) 1997 The NetBSD Foundation, Inc.
 * All rights reserved.
 *
 * This code is derived from software contributed to The NetBSD Foundation
 * by Michael Graff [EMAIL PROTECTED].  This code uses ideas and
 * algorithms from the Linux driver written by Ted Ts'o.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 

Re: entropy depletion

2005-01-27 Thread Daniel Carosone
On Tue, Jan 11, 2005 at 03:48:32PM -0500, William Allen Simpson wrote:
 2.  set the contract in the read() call such that
 the bits returned may be internally entangled, but
 must not be entangled with any other read().  This
 can trivially be met by locking the device for
 single read access, and resetting the pool after
 every read.  Slow, but it's what the caller wanted!
 Better variants can be experimented on...

 Now I don't remember anybody suggesting that before!  Perfect,
 except that an attacker knows when to begin watching, and is assured
 that anything before s/he began watching was tossed.

The point is interesting and well made, but I'm not sure it was
intended as a concrete implementation proposal.  Still, as long as
it's floating by..

Rather than locking out other readers from the device (potential
denial-of-service, worse than can be done now by drawing down the
entropy estimator?), consider an implementation that made /dev/random
a cloning device.  Each open would create its own distinct instance,
with unique state history.

One might initialise the clone with a zero (fully decoupled, but
potentially weak as noted above) state, or with a copy of the base
system device state at the time of cloning, but with a zero'd
estimator. From there, you allow it to accumulate events until the
estimator is ready again.  Perhaps you clone with a copy of the state,
and allow an ioctl to zero the private copy.. perhaps you only allow a
clone open to complete once the base generator has been clocked a few
times with new events since the last clone.. many other ideas.  Most
implementations allow data to be written into the random device, too,
so a process with a cloner open could add some of its own 'entropy' to
the mix for its private pool, if it wanted additional seeding
(usually, such writes are not counted by the estimator).
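
To make that concrete, a hypothetical sketch of the per-open state
(all names invented for illustration; no real driver works exactly
like this):

#include <stdint.h>
#include <stdlib.h>
#include <string.h>

#define POOLWORDS 128   /* assumed pool size */

/*
 * Hypothetical sketch of a cloning /dev/random: each open() gets a
 * private copy of the base pool, but with the entropy estimator
 * zeroed, so the clone must accumulate fresh events before its
 * estimator reports ready.
 */
struct rndclone {
    uint32_t pool[POOLWORDS];   /* private pool state */
    uint32_t entropy_bits;      /* private estimator  */
};

static struct rndclone *
rndclone_open(const uint32_t base_pool[POOLWORDS])
{
    struct rndclone *c = malloc(sizeof(*c));

    if (c == NULL)
        return NULL;
    memcpy(c->pool, base_pool, sizeof(c->pool));    /* seed from base */
    c->entropy_bits = 0;    /* decoupled: trust nothing inherited */
    return c;
}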

The base system device accumulates events while no cloners are
open. You'd need some mechanism to distribute these events amongst
accumulating clones such that they'd remain decoupled.

It's an interesting thought diversion, at least - but it seems to have
as many potential ways to hurt as to help, especially if you're short
of good entropy events in the first place.

Like smb, frankly I'm more interested in first establishing good
behaviour when there's little or no good-quality input available, for
whatever reason.

--
Dan.




Re: Is 3DES Broken?

2005-02-02 Thread Daniel Carosone
On Mon, Jan 31, 2005 at 10:38:53PM -0500, Steven M. Bellovin wrote:
 When using CBC mode, one should not encrypt more than 2^32 64-bit 
 blocks under a given key.  That comes to ~275G bits, which means that 
 on a GigE link running flat out you need to rekey at least every 5 
 minutes, which is often impractical. 

Notably for those encrypting data at rest, that limit is also rather
smaller than current hard disk sizes, which are much harder to re-key.

(Even for those only encrypting data in flight, it has practical
implications regarding the feasibility of capturing that data for later
analysis)
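
To spell out the arithmetic behind both points (my figures): 2^32
blocks x 64 bits = 2^38 bits, about 275 Gbit, which at 1 Gbit/s flat
out is roughly 275 seconds, or about 4.6 minutes.  As data at rest,
2^32 blocks x 8 bytes comes to only 32 GiB - well below the capacity
of an ordinary hard disk today.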

--
Dan.




Re: SSL stops credit card sniffing is a correlation/causality myth

2005-06-01 Thread Daniel Carosone
On Tue, May 31, 2005 at 06:43:56PM -0400, Perry E. Metzger wrote:
  So we need to see a Choicepoint for listening and sniffing and so
  forth.
 
 No, we really don't.

Perhaps we do - not so much as a source of hard statistical data, but
as a source of hard pain.

People making (uninformed or ill-considered, despite our best efforts
to inform) business and risk decisions seemingly need concrete
examples to avoid.

It's depressing how much of what we actually achieve is determined by
primitive pain response reflexes - even when you're in the beneficial
position of having past insistences validated by the pain of others.

 The day to day problem of security at real financial institutions is
 the fact that humans are very poor at managing complexity, and that
 human error is extremely pervasive. I've yet to sit in a conference
 room and think "oh, if I only had more statistical data", but I've
 frequently been frustrated by gross incompetence.

Amen.

--
Dan.




Re: encrypted tapes (was Re: Papers about Algorithm hiding ?)

2005-06-07 Thread Daniel Carosone
On Tue, Jun 07, 2005 at 07:48:22PM -0400, Perry E. Metzger wrote:
 It happens because some idiot web designer thought it was a nice
 look, and their security people are too ignorant or too powerless to
 stop it, that's why.
 
 It has nothing to do with cost. The largest non-bank card issuer in
 the world can pay for the fifteen minutes of time it would take to fix
 it by putting the login on a separate SSL protected page. It has
 nothing to do with ease of use or tools that default safe. The
 problem is that they don't know there is anything to fix at a level
 of the firm that is capable of taking the decision to fix it.

It may well be that, rather than being the fault of the web designer,
an explicit business requirement was presented to do this because of
perceived ease of use for the customer...  and then nobody knew any
better, so the idea took hold before anyone could kill it.

At least, that's how it's happened in several places I've been
involved, and it took more than a little effort to make them
understand why it was a bad idea.  Thankfully, they were paying for my
time and advice, and *also* had the sense to listen to it when given -
the two don't always go together - but in this sense security did cost
them something.

The customer ease-of-use argument is quite easy to see - and also
quite easy to provide, if they want to, by putting the normal company
homepage that contains the form on SSL too.  I've had conversations
about the cost of doing that with knowledgeable web designers (largely
centering on image-caching concerns), and really, it isn't quite free,
even if the costs come from unreasonable and annoying places.  Those
costs can have returns, even non-risk ones like being better able to
track users' browsing patterns and site navigation - but just by
virtue of having to have the conversation, it's no longer free,
especially if people bring organisational politics to the table.

 Security these days is usually bad not because good security is
 expensive, or because it is particularly hard. 

These are the things that make the difference between middling-fair
and somewhat-decent security (let alone good security, which requires
many other things to be Done Right, more operational than technical).

The irony is that bad IT security (just like any other bad IT) is
expensive, often much more expensive - even without considering the
potential costs involved if the risks are realised.

 It is bad because even people at giant multinational corporations
 with enough budget to spare are too dumb to implement it.

Worse than that: people at the giant multinational corporations who
provide the outsourced IT services to those other corporations - and
who sell their services based on 'economies of scale' and 'industry
expertise' and 'best practice', and who are thus charged with the
responsibility of knowing better - are too dumb to implement it.

And then they're too dumb to implement it at other customer sites even
when that one rare customer comes along who knows better (or at least
knows to ask for independent outside help), and has fought hard to
convince their outsource provider of the need and economic sense of
not being dumb on their behalf.  Most organisations just let them get
away with it, and think they're getting a good deal, while the very
basis on which that deal was sold is defeated.

If it weren't for all the other people who get hurt by the shrapnel,
it would be very hard to continue trying to help people who seem to
like walking around with bullet holes in their shoes.

--
Dan.




Re: solving the wrong problem

2005-08-09 Thread Daniel Carosone
On Tue, Aug 09, 2005 at 01:04:10AM +1200, Peter Gutmann wrote:
 That sounds a bit like unicorn insurance
 [..]
 However, this is slightly different from what Perry was suggesting.
 There seem to be at least four subclasses of problem here:
 
 1. ??? : A solution based on a misunderstanding of what the real problem is.
 
 2. Unicorn insurance: A solution to a nonexistent problem.
 
 3. ???: A solution to a problem created artificially in order to justify its
solution (or at least to justify publication of an academic paper
containing a solution).
 
 4. PKI: A solution in search of a problem.

Nice list, and terms for the remaining ??? cases would be welcome, but
I'm not sure that any of these captures one essential aspect of the
problem Perry mentioned, at least as I see it.

One of the nice aspects of the snake oil description is the
implications it has about the dodgy seller, rather than the product.
To my view, much of the Quantum Cryptography (et al) discussion has
this aspect: potentially very cool and useful technology in other
circumstances, but being sold into a market not because they
particularly need it, but just because that's where the money is.
Certainly, that's the aspect I find most objectionable, and thus
deserving of a derogatory term, rather than just general frustration
at naive user stupidity.

None of the terms proposed so far capture this aspect.  The specific
example given doesn't quite fit anywhere on your list.  It's somewhere
between #3 and #4; perhaps it's a #4 with a dodgy salesman trying to
push it as a #3 until a better problem is found for it to solve?

I was going to suggest "porpoise oil" (from "not fit for purpose"),
but how about "unicorn oil" - something that may well have some
uncertain magical properties, but is still sold under false pretenses,
and not really going to cure your ills?

--
Dan.





Re: GnuTLS (libgcrypt really) and Postfix

2006-02-16 Thread Daniel Carosone
On Tue, Feb 14, 2006 at 04:26:35PM -0500, Steven M. Bellovin wrote:
 In message [EMAIL PROTECTED], Werner Koch writes:
 I agree.  However the case at hand is a bit different.  I can't
 imagine how any application or upper layer will be able to recover
 from that error (ENOENT when opening /dev/random).  Okay, the special
 file might just be missing and a mknod would fix that ;-).  Is it the
 duty of an application to fix an incomplete installation - how long
 shall this be taken - this is not the Unix philosophy.
 
 It can take context-specific error recovery.  Maybe that's greying out 
 the encrypt button on a large GUI.  Maybe it's paging the system 
 administrator.  It can run 'mknod' inside the appropriate chroot 
 partition, much as /sbin/init on some systems creates /dev/console.  It 
 can symlink /dev/geigercounter to /dev/random.  It can load the kernel 
 module that implements /dev/random.  It can do a lot of things that may 
 be more appropriate than exiting.  

Or an even simpler example: maybe it will still be a fatal error, but
there's some important state outside the library being called that it
should clean up before exiting so abruptly.   

Somehow, applications that are consumers of crypto libraries seem like
likely candidates for this sort of thing.

--
Dan.



Re: Creativity and security

2006-03-24 Thread Daniel Carosone
On Thu, Mar 23, 2006 at 08:15:50PM -, Dave Korn wrote:
 
   As we all know, when you pay with a credit or debit card at a store, it's 
 important to take the receipt with you
 [..]
  So what they've been doing at my local branch of Marks & Spencer for the
 past few weeks is, at the end of the transaction after the (now always 
 chip'n'pin-based) card reader finishes authorizing your transaction, the 
 cashier at the till asks you whether you actually /want/ the receipt or not; 
 [..] 
   ... Of course, three seconds after your back is turned, the cashier can 
 still go ahead and press the button anyway, and then /they/ can have your 
 receipt.
 [..]
 I think the better solution would still be for the receipt 
 to be printed out every single time and the staff trained in the importance 
 of not letting customers leave without taking their receipts with them.

Two observations:

 - your preferred solution to a problem of fraudulent cashier staff
   doing the wrong thing ... relies on the cashier staff doing the right
   thing.  Training fraudulent and creative cashiers on the importance
   of this action probably encourages them to come up with other ways
   to do the same thing.

 - even when they've handed you a receipt, on many systems there's a
   good chance they can get a reprint in those same three seconds.
   Paper jams or gets torn, ribbons run out, and sometimes you
   legitimately need a duplicate.

--
Dan.



Re: RSA SecurID SID800 Token vulnerable by design

2006-09-15 Thread Daniel Carosone
On Thu, Sep 14, 2006 at 02:48:54PM -0400, Leichter, Jerry wrote:
 | The problem is that _because there is an interface to poll the token for
 | a code across the USB bus_, malicious software can *repeatedly* steal new
 | token codes *any time it wants to*.  This means that it can steal codes
 | when the user is not even attempting to authenticate

 I think this summarizes things nicely.

Me too.  While less emphatic, my reaction to Vin's post was similar to
Thor's.. that it seemed to at least miss, if not bury, this point.

But let's not also forget that these criticisms apply approximately
equally to smart card deployments with readers that lack a dedicated
pinpad and signing display.  For better or worse, people use those to
unlock the token with a pin entered on the host keyboard, and allow
any authentication by any application during a 'session', too.

 Pressing the button supplies exactly the confirmation of intent that
 was lost.

And further, a token that includes more buttons (specifically, a
pinpad for secure entry of the 'know' factor) is vulnerable to fewer
attacks, and one that permits a challenge-response factor for specific
transaction approval is vulnerable to fewer again (both at a cost).
Several vendors have useful offerings of these types.

The worst cost for these more advanced methods may be in user
acceptance: having to type one or more things into the token, and then
the response into the computer.  A USB connected token could improve
on this by transporting the challenge and response, displaying the
challenge while leaving the pinpad for authentication and approval.

But the best attribute of tokens is they neither use nor need *any*
interface other than a user interface. Therefore:

 * it works anywhere with any client device, like my phone. I choose
   my token model and balance user overhead according to my needs.

 * it is simple to analyse.  How would you ensure the above ideal
   hypothetical USB token really couldn't be subverted over the bus?

By the time you've given up that benefit, and done all that analysis,
and dealt with platform issues, perhaps you might as well get the
proper pinpad smartcard it's starting to sound like, and get proper
signatures as well, rather than using a shared-key with the server.

--
Dan.




Re: Seagate announces hardware FDE for laptop and desktop machines

2007-10-03 Thread Daniel Carosone
On Tue, Oct 02, 2007 at 03:50:27PM +0200, Simon Josefsson wrote:
 Without access to the device (I've contacted Hitachi EMEA to find out if
 it is possible to purchase the special disks) it is difficult to infer
 how it works, but the final page of the howto seems strange:
 
 ...
 
NOTE: All data on the hard drive will be accessible. A secure erase
should be performed before disposing or redeploying the drive to
avoid inadvertent disclosure of data.
 
 One would assume that if you disable the password, the data would NOT be
 accessible.  Making it accessible should require a read+decrypt+write of
 the entire disk, which would be quite time consuming.  It may be that
 this is happening in the background, although it isn't clear.

 It sounds to me as if they are storing the AES key used for bulk
 encryption somewhere on the disk, and that it can be unlocked via the
 password.

Assumption: clearing the password stores the key encrypted with an
empty password or an all-zeros key, or some other similar construct -
effectively in plain text.

 So it may be that the bulk data encryption AES key is
 randomized by the device (using what entropy?) or possibly generated in
 the factory, rather than derived from the password.

Speculation: the drive always encrypts the platters with a (fixed) AES
key, obviating the need to track which sectors are encrypted or
not. Setting the drive password simply changes the key-handling.

Implication: fixed keys may be known and data recoverable from factory
records, e.g. for law enforcement, even if this is not provided as an
end-user service.
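
A minimal sketch of the sort of key handling being speculated about
here (entirely hypothetical - single-block AES-ECB wrapping is used
only to keep the example short, and this is not the vendor's
documented design):

#include <openssl/aes.h>

/*
 * Hypothetical sketch: the bulk data key is fixed at manufacture and
 * never changes; the password only changes how that key is wrapped.
 * "Clearing" the password rewraps under a well-known (all-zeros) KEK,
 * leaving the data readable - consistent with the howto's warning.
 */
static void
wrap_data_key(const unsigned char kek[16],  /* from a password KDF,
                                               or all zeros when the
                                               password is cleared */
    const unsigned char data_key[16], unsigned char wrapped[16])
{
    AES_KEY k;

    AES_set_encrypt_key(kek, 128, &k);
    AES_encrypt(data_key, wrapped, &k);     /* stored on the platter */
}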

--
Dan.




Re: Gutmann Soundwave Therapy

2008-02-09 Thread Daniel Carosone
Others have made similar points and suggestions, not picking on this
instance in particular:

On Mon, Feb 04, 2008 at 02:48:08PM -0700, Martin James Cochran wrote:
 Additionally, in order to conserve bandwidth you might want to make a 
 trade-off where some packets may be forged with small probability (in the 
 VOIP case, that means an attacker gets to select a fraction of a second of 
 sound, which is probably harmless)

This is ok, if you consider the only threat to be against the final
endpoint: a human listening to a short-term, disposable conversation.
I can think of some counter-examples where these assumptions don't
hold:

 - A data-driven exploit against an implementation vulnerability in
   your codec of choice.  Always a possibility, but a risk you might
   rate differently (or a patch you might deploy on a different
   schedule) for conversations with known and trusted peers than you
   would for arbitrary peers, let alone maliciously-inserted traffic.
   How many image decoding vulnerabilities have we seen lately, again?

 - People have invented and do use such horribly-wrong things as
   fax-over-voip; while they seem to have some belief in their own
   business case, I may not have as much faith in their implementation
   robustness.
   
 - Where it's audio, but the audience is different such that the
   impact of short bursts of malicious sound is different: larger
   teleconferences, live interviews or reporting by journalists, and
   other occasions, particularly where the credibility of the speaker
   is important.  Fractions of a second of sound are all I might need
   to insert to .. er .. emulate Tourette's syndrome.  Fractions of a
   second of soundwave therapy could still be highly unpleasant or
   embarrassing.

Particularly for the first point, early validation for packet
integrity in general can be a useful defensive tool against unknown
potential implementation vulnerabilities.  I've used similar arguments
before around the use of keyed authentication of other protocols, such
as SNMPv3 and NTP.
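
As a sketch of what early validation means in implementation terms
(illustrative only: the trailing HMAC-SHA1 layout and all names here
are my assumptions, not any particular protocol's):

#include <stddef.h>
#include <openssl/crypto.h>
#include <openssl/evp.h>
#include <openssl/hmac.h>

#define MACLEN 20   /* HMAC-SHA1 */

/*
 * Sketch: authenticate the whole packet before any byte of it reaches
 * the complex (and possibly vulnerable) codec parser; unauthenticated
 * data is dropped without ever being parsed.
 */
static int
packet_ok(const unsigned char *key, int keylen,
    const unsigned char *pkt, size_t pktlen)
{
    unsigned char mac[MACLEN];
    unsigned int maclen = sizeof(mac);

    if (pktlen < MACLEN)
        return 0;
    HMAC(EVP_sha1(), key, keylen, pkt, pktlen - MACLEN, mac, &maclen);
    return CRYPTO_memcmp(mac, pkt + pktlen - MACLEN, MACLEN) == 0;
}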

It also reminds me of examples where cryptographic protections have
only covered certain fields in a header or message.  Attackers may
find novel ways to use the unprotected space, plus it just makes the
whole job of risk analysis at deployment orders of magnitude more
complex.

Without dismissing the rest of the economic arguments, when it comes
to these kinds of vulnerabilities, be very wary of giving an attacker
this inch; they may take a mile.

--
Dan.




Re: Gutmann Soundwave Therapy

2008-02-13 Thread Daniel Carosone
On Mon, Feb 11, 2008 at 07:01:07PM +1300, Peter Gutmann wrote:
 Daniel Carosone [EMAIL PROTECTED] writes:
 [...]
 Particularly for the first point, early validation for packet integrity in
 general can be a useful defensive tool against unknown potential
 implementation vulnerabilities.
 
 This is an example of what psychologists call own-side bias (everyone thinks
 like us), in this case the assumption that after a peer has authenticated
 itself it'd never dream of sending a malformed packet and so we don't need to
 do any checking after the handshake has completed.  Why would you trust data
 coming from a remote system just because they've jumped through a few hoops
 before sending it?  I can steal the remote system's credentials or hijack the
 session and then send you anything I want, it's no safer to blindly accept the
 data if there's a MAC attached or not.

Point taken, but I respectfully disagree with the relevance in the
present context, though of course I agree entirely in the wider
philosophical sense.

Remember that we're also talking about practical deployment decisions:
 - if someone steals credentials, they can do all sorts of mischief
   and damage; the incremental risk in the present discussion is that
   doing 'lossy'/partial validation may allow additional injection and
   MITM attacks beyond those.
 - especially for the other cases I gave (SNMP and NTP), the
   alternative mitigating controls are such strong things as IP
   address based ACLs (on UDP packets). I'll take the stronger tool if
   it's on offer.

The fact this kind of authentication is applied before the packet gets
to more complex and potentially vulnerable parsing and processing code
gives me a valuable opportunity to be defensive, especially as an end
customer deploying some random vendor's kit.

In those cases, I don't have visibility of the implementation, but I
do have some assurance about the order of operations and can put that
structural knowledge to good use.  Much the same is true in this
discussion about protocol design; we're making no specifications about
processing of the data once the transport hands it off, but we're
starting to make assumptions about the risks therein, and the reliance
those layers may be placing on the transport, for better or worse.

Your criticism would be fair if I was advocating blindly accepting the
data or not doing any checking after the initial handshake.  It would
be fair criticism of a codec vendor who took such a stance, relying
overly on transport authentication (or forcing me to). I am most
certainly not advocating that, merely recognising that sometimes such
checking may be deficient or vulnerable, or just simply uncertain. 

Good defensive protocol design lets me validate the blob before
inspecting the fields; poor defensive programming conflates frame
validation with more detailed syntactic and semantic validation later.

If there are authentication-hijacking vulnerabilities in the endpoints
(like your SIP gateway), sure, I'm screwed in a number of ways.
That's sad, but a given regardless of whatever variant and detail of
keying and MACing mechanism this discussion comes up with.

 Hostile data inside a secure tunnel or wrapper is still hostile data. 

If cryptography can come up with some way to ensure robustness against
hostile data all the way down an implementation stack, regardless of
layering, we'll all be surprised.  Some of us might even be very rich.

Otherwise, it's a risk mitigation tool, subject to constraints we need
to understand.  If the constraints are ones of key management and
endpoint security, I can use the mechanism in my toolkit.  If the
constraints mean that every fourth SNMP request or routing update will
be unauthenticated, it's much less use to me as a structural security
layer; the bar isn't raised in any practical sense.

 As the OP said, as long as you can deterministically detect attacks
 (which a 1-of-n packet MAC will do) you're not giving up much
 security by not MAC'ing all packets.

I attempted to illustrate, with some counterexamples, threat models
where even one unauthenticated packet could lose you more than not
much security.  Threats where detection of the attack wasn't enough,
or was too late, or was itself the point.

Robust implementation behind that MAC is essential; it is what makes
the loss genuinely "not much", and it also addresses the broader
threats that are more likely in the overall economic argument we
acknowledged at the outset - but it is outside what I saw as the OP's
scope, and not part of the incremental risk I was highlighting.

--
Dan.
