RE: crypto question - using crypto to protect financial transactions

2002-04-08 Thread Amir Herzberg

I understand the goal of allowing secure and anonymous financial
transactions via the Net. I'm personally very interested in this,
although I must admit I am also a bit concerned about the social
implications if this becomes a reality (or when it does, since I believe
it eventually will). What concerns me is tax avoidance, especially by
wealthy individuals and companies. Nobody likes taxation (at least
personally :-), but it is still the basis for the operation of states - and
while changes may be good, they are also risky. 

Anyway, forgetting for a moment the question of should we do it, let's
focus on the question of how we do it :-) 

I looked up Andrew's site, and actually there are not many details
there (yet?). I think his initial focus and question was on the issue of
whether one can entrust one's private key to the financial server, and his
answer seems to be: you can, if you split the key between several servers
using threshold or proactive signatures (proactive schemes allow
recovery from penetrations of servers - and btw, this is an area
deserving more implementation effort, beyond what we did in IBM). 
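The splitting idea can be illustrated with plain Shamir secret sharing, the building block beneath the threshold schemes Amir mentions (this is only a sketch of the sharing step, not the proactive signature protocol itself; the prime modulus and parameters are arbitrary choices for illustration):

```python
import random

PRIME = 2**127 - 1  # a Mersenne prime larger than any secret we share

def split_secret(secret, n, k):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    shares = []
    for x in range(1, n + 1):
        y = 0
        for c in reversed(coeffs):      # Horner evaluation of the polynomial
            y = (y * x + c) % PRIME
        shares.append((x, y))
    return shares

def recover_secret(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

shares = split_secret(123456789, n=5, k=3)
assert recover_secret(shares[:3]) == 123456789   # any 3 shares suffice
assert recover_secret(shares[2:]) == 123456789
```

Fewer than k shares reveal nothing about the secret, which is what lets the servers jointly hold a signing key none of them knows individually.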

I think there may be even more critical hurdles for successful financial
crypto services. A very important one is interoperability between
different financial service providers (the companies that keep your
money... e.g. banks). Most crypto-financial efforts so far focused on a
centralized model - one bank - which is much easier to design, but
very hard to make succeed. I've done some work on secure interoperability
among providers - it was actually the main feature of IBM Micro
Payments. IBM has also applied for patents on some of the ideas. 

Another important issue is the automated management of trust and
reputation, allowing customers to make (automated) trust decisions about
providers of services and goods (including both financial services and
merchants). Here I agree with Andrew that for many applications,
financial transactions should not be reversible (disputable), and hence
trust and reputation become the main means of consumer protection. 

Regards, Amir Herzberg
See http://amir.beesites.co.il/book.html for lectures and
draft chapters from my book in progress, 'Secure Communication and Commerce
Using Cryptography'; feedback welcome!
 



-
The Cryptography Mailing List
Unsubscribe by sending unsubscribe cryptography to [EMAIL PROTECTED]



Re: crypto question

2002-03-29 Thread Arnold G. Reinhold

At 12:23 PM -0700 3/24/02, [EMAIL PROTECTED] wrote:
> or just security proportional to risk ...

While a valid engineering truism, I have a number of issues with that dictum:

1.  It is too often used as an excuse for inaction by people who are 
poorly equipped to judge either risk or cost.  We've all encountered 
the "experts on tap, not on top" attitude of many managements.  There 
was a good reason the U.S. centralized all crypto in the NSA after WW 
II: managers in organizations like the State Department simply 
ignored known security compromises.  Communications security never 
had a high priority with functional managers, so it was taken away 
from them.

2. Costs are often overstated or quoted out of context. A $1000 
coprocessor that can verify 100 keys per second ends up costing under 
a millicent per verification, even allowing a large factor for peak 
demand.  The added cost to store long keys is tiny. Good engineering 
(often the biggest cost) can be spread over many applications. Cost 
of keeping up with security patches is likely modest compared to 24/7 
watchman security for a physical location.
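Arnold's millicent figure checks out under plausible amortization assumptions (the 3-year lifetime and 10% average utilization below are my numbers, not his):

```python
# Back-of-the-envelope check of the "under a millicent per verification"
# claim. Assumed figures: 3-year service life, 10% average utilization
# (a generous allowance for peak-demand headroom).
price_usd = 1000.0
rate_per_sec = 100                     # verifications per second at capacity
lifetime_sec = 3 * 365 * 86400
utilization = 0.10

total_verifications = rate_per_sec * utilization * lifetime_sec
# 1 USD = 100 cents = 100,000 millicents
cost_millicents = price_usd / total_verifications * 100_000

print(round(cost_millicents, 3))       # roughly a tenth of a millicent
```

Even at these conservative settings the per-verification cost is about 0.1 millicent, comfortably under Arnold's bound.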

3. The nature of risk is very different in cyberspace. Many 
cryptographic techniques introduce single points of failure.  Bonnie 
and Clyde can't rob all the banks at once, but the wily hacker might. 
It may be cheaper to employ bullet-proof solutions than to really 
understand the risks in "good enough" approaches.

4. There is also the question of risk to whom. Many businesses seem 
to assume that the government will pick up the tab for a major cyber 
terrorism incident.  If business execs can say with a straight face 
that basic accounting principles are too difficult for them to grasp, 
imagine what they will say about a massive crypto failure. So in a 
sense taxpayers and consumers are being asked to insure some of 
these risks.  I suspect they would gladly pay the added costs 
(pennies) to apply the best available technology.

5. There is a failure to distinguish between components and systems. 
It may be true that any real world system has holes, but that is no 
reason to give up on perfecting the tools used to build these 
systems. Incorporating known weaknesses into new designs is not 
justifiable, absent a compelling, fact-based, cost/security analysis.

Arnold Reinhold





Re: crypto question

2002-03-24 Thread Jim Choate


On Fri, 22 Mar 2002, Arnold G. Reinhold wrote:

> I'm not sure what changes in your argument if you delete the word
> "physical".

I don't think you understand what that means. I was responsible for a
multi-campus computer-controlled real-time security system (at the time
the largest private system ever built), connected to the fire, telephone,
video, and computer networks. This involved mag switches, PIRs, thermal,
ultrasonic, and microwave sensors, mag-stripe cards, etc. We even had a
small reactor on campus, as well as a couple of Gutenberg bibles that my
group was partially responsible for.

> Perhaps we should all just give up with this security nonsense.

I'm not suggesting that at all. I -am- suggesting that one should never
underestimate one's opponents. If you could build it, so can they. If they
can build it, they can spend time taking it apart. Do most security
organizations or systems have those sorts of time/resources? My experience
is they don't. The major issue is more one of responsibility/indemnity in
conflict with time. The longer a system remains unbroken, the more likely
it is to be broken; the only significant caveat is if the system is
updated and modified often enough. Then there is a data-collection issue
that limits what is -reasonable-.


 --


 There is less in this than meets the eye.

 Tallulah Bankhead
 [EMAIL PROTECTED] www.ssz.com
 [EMAIL PROTECTED]  www.open-forge.org







Re: crypto question

2002-03-24 Thread Jim Choate


On Sun, 24 Mar 2002 [EMAIL PROTECTED] wrote:
 
> or just security proportional to risk ... random refs:

There's a shortcoming with that view.

In order to apply realistic metrics to what that risk is (e.g. 1 in 100
years), one must have systems being broken in order to vet it. It's one
thing to state an axiom, as you have done. It's a whole other thing to apply
it within a time schedule, budget, and general social setting. The three
primary questions that occur when trying to give these real numbers
become:

-   How long between service checks?

-   How long between system upgrades/replacements?

-   How have other systems stood up to intentional attacks?

The first is important to vet the continued operation of an existing
system. The second is important with respect to the opportunity to subvert
and the diffusion of 'classified' info out of controlled environments (e.g.
robber's girlfriend is a student... who applied for an internship... who
copies the random page hither and yon...). And finally, the third gives one
a real grasp of cost and 'friction' (to borrow a military term).

A special note on the third: it implies that at least some of the
mechanisms of the same 'class' are(!) being broken. If not, then one really
has no way to make a metric. The only engineering answer is 'I don't
know'; I make the distinction between political and organizational needs
and engineering ones.

The vast majority of security mechanisms fail on several of these
regularly. It's not intentional, but unless you're running something with
the discipline of a military base or prison, you're going to have
problems.

I don't believe there are enough deliberate public attacks to make the
third boundary condition relevant in most security situations. But on the
flip side, most security situations are really overly sensitive to their
probability. [1]

[1] Which is probably a good thing for the industry :)








Re: crypto question

2002-03-23 Thread Jim Choate


As someone who spent 5 years doing all the physical security for a major
university I can say that ALL physical systems can be broken. No
exception. The three laws of thermodynamics apply to security systems as
well. 

There is ALWAYS a hole.

On Thu, 21 Mar 2002, Arnold G. Reinhold wrote:

> It's not clear to me what having the human present accomplishes.
> While the power was out, the node computer could have been tampered
> with, e.g. a key logger attached.

> Who said you were allowed to lose power and stay secure? Laptops are
> pretty cheap and come with multi-hour batteries.  There should be
> enough physical security around the node to prevent someone from
> tripping power.

> One approach might be to surround a remote node with enough sensors
> so that it can detect an unauthorized attempt to physically approach
> it.








Re: crypto question

2002-03-23 Thread Arnold G. Reinhold

There are groups with lots of money and dedicated, trained agents who 
are willing to die that would dearly like to steal a nuclear weapon. 
So far, they have not succeeded (if they do, I fear we will know 
about it quickly).  So someone has been able to do physical security 
right.

The problem is doing it in a way that is affordable and doesn't 
require an army. Designing computers that can detect an attack seems 
worth exploring. FIPS-140 envisions such an approach when it talks 
about wrapping security modules in a mesh of insulated wire whose 
penetration tells the module to zeroize.

I'm not sure what changes in your argument if you delete the word 
"physical".  Perhaps we should all just give up with this security 
nonsense.


Arnold Reinhold






Re: crypto question

2002-03-23 Thread Mike Brodhead


> The problem is doing it in a way that is affordable and doesn't
> require an army.

[snip]

> I'm not sure what changes in your argument if you delete the word
> physical.  Perhaps we should all just give up with this security
> nonsense.

:)

Agreed.  It's not about perfect security, it's about Good Enough
security.  Risk is not something we can eliminate, but it is something
we can manage.

It does not surprise me when non-security people forget that point,
but I am really surprised at how often security people seem to forget
it.

--mkb






Re: crypto question

2002-03-23 Thread D. A. Honig

At 01:04 PM 3/21/02 -0500, Nelson Minar wrote:
> > Question.  Is it possible to have code that contains a private encryption
> > key safely?

> As a practical matter, yes and no. Practically no, because any way you
> hide the encryption key could be reverse engineered. Practically yes,
> because if you work at it you can make the key hard enough to reverse
> engineer that it is sufficient for your threat model.

> This problem is the same problem as copy protection, digital rights
> management, or protecting mobile agents from the computers they run
> on. They all boil down to the same challenge; you want to put some
> data on a computer you don't control but then restrict what can be
> done with that data.

The fundamental issue is: who benefits from keeping the secret secret?
If the holder of the bankcard (or whatever) is liable for abuse
due to cracking, you are in a much better position than if the 
bank loses when a cracker cracks the card in his possession.

This of course does not help when an adversary *steals* access to the
secret in the bankcard.  It only helps when the holder of the secret
has an interest in keeping the secret.

One gathers from this discussion that the content-creator is worried
about content-users cracking their system; that is in general hopeless,
modulo the cost factors.  (And remember what Schneier wrote about
"all it takes is one cracker plus the Internet," if a crack tool is readily
copied.)

dh








Re: crypto question

2002-03-21 Thread Nelson Minar

> Question.  Is it possible to have code that contains a private encryption
> key safely?

As a practical matter, yes and no. Practically no, because any way you
hide the encryption key could be reverse engineered. Practically yes,
because if you work at it you can make the key hard enough to reverse
engineer that it is sufficient for your threat model.

This problem is the same problem as copy protection, digital rights
management, or protecting mobile agents from the computers they run
on. They all boil down to the same challenge; you want to put some
data on a computer you don't control but then restrict what can be
done with that data.

The digital rights management folks try to restrict the program that
uses the data; region-locked DVD players, digital music software that
obeys copyright restrictions (SDMI, etc), or the latest idea, having
an encrypted channel all the way to your speakers and monitor which
are secure tamper-proof devices. All of these schemes are defeatable,
but can be made quite difficult.

The mobile agent community has come up with some clever ideas on the
problem, but nothing that's a practical solution yet. The version here
is you want to run a program on a remote untrusted computer and you
want to prevent your computation from being subverted or stolen. It's
very hard, and my intuition was that it would be impossible, but in fact
there are some interesting theoretical results showing it is possible, at
least in some limited domains.

I haven't followed this research recently, but here are some good
papers from a few years ago:

Towards Mobile Cryptography (1998)
Tomas Sander, Christian F. Tschudin
http://citeseer.nj.nec.com/167218.html
We present techniques how to achieve non--interactive computing
with encrypted programs in certain cases and give a complete
solution for this problem in important instances.

Protecting Mobile Agents Against Malicious Hosts
Tomas Sander, Christian F. Tschudin
http://citeseer.nj.nec.com/329367.html




Re: crypto question

2002-03-21 Thread Arnold G. Reinhold

At 8:52 PM -0800 3/20/02, Mike Brodhead wrote:
> > The usual good solution is to make a human type in a secret.

> Of course, the downside is that the appropriate human must be present
> for the system to come up properly.

It's not clear to me what having the human present accomplishes. 
While the power was out, the node computer could have been tampered 
with, e.g. a key logger attached.


> In some situations, the system must be able to boot into a working
> state.  That way, even if somebody accidentally trips the power-- I've
> had this happen on production boxen --the system outage lasts only as
> long as the boot time.  If a particular human (or one of a small
> number of secret holders) must be involved, then the outage could be
> measured in hours rather than minutes.

Who said you were allowed to lose power and stay secure? Laptops are 
pretty cheap and come with multi-hour batteries.  There should be 
enough physical security around the node to prevent someone from 
tripping power.

One approach might be to surround a remote node with enough sensors 
so that it can detect an unauthorized attempt to physically approach 
it. Web cams are pretty cheap. Several cameras and/or mirrors would 
be required to get 4-pi coverage.  Software could detect frame-to-frame 
changes that indicate an intrusion. The machine would be kept 
in a secure closet or cabinet. The machine would be set up in 
whatever location by a trusted person or team and would remain 
conscious from then on. Entry would be authorized via an 
authenticated link. Any unauthorized entry would result in the node 
destroying its secrets. It would then have to be replaced.
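A toy sketch of the zeroize-on-intrusion logic described above; the byte-per-pixel frame representation and the 5% change threshold are assumptions for illustration, not anything from the post:

```python
# If too many pixels differ between successive camera frames, assume an
# intrusion and wipe the key material in place.

CHANGE_THRESHOLD = 0.05  # fraction of pixels allowed to differ (assumed)

def intrusion_detected(prev_frame, frame):
    changed = sum(a != b for a, b in zip(prev_frame, frame))
    return changed / len(frame) > CHANGE_THRESHOLD

def zeroize(secret):
    """Overwrite a mutable secret buffer with zeros."""
    for i in range(len(secret)):
        secret[i] = 0

key = bytearray(b"top-secret-key!!")          # 16 bytes of key material
quiet = bytes([10] * 100)                     # two frames of a still scene
moved = bytes([10] * 40 + [200] * 60)         # 60% of pixels changed

if intrusion_detected(quiet, moved):
    zeroize(key)

assert key == bytearray(16)                   # secrets destroyed
```

The point of the pattern is that the node itself decides to destroy its secrets; the physical sensors only have to be good enough that tampering cannot happen faster than detection.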


> Don't forget that Availability is also an important aspect of
> security.  It all depends on your threat model.


The approach I outlined offers very high availability.


Arnold Reinhold




Re: crypto question

2002-03-21 Thread Pat Farrell

At 08:52 PM 3/20/2002 -0800, Mike Brodhead wrote:
> > The usual good solution is to make a human type in a secret.
> Of course, the downside is that the appropriate human must be present
> for the system to come up properly.  

Yes, of course, that is why I wrote:
The usual bad solution is to store it in a secret place, or encrypted with 
a key kept elsewhere (source, secret file, LDAP, etc.)

as most operations don't want to wait for a human to type something.
As long as folks understand that they can't really have security,
then it is just an engineering tradeoff.

Several folks also wrote about using an SBO approach:
1) You are trying to distribute an obfuscated binary which
encrypts/decrypts using a secret key, with the goal that the key resist
reverse engineering. The usual application for this is DRM, but you can
also use this to do public-key encryption from any symmetric algorithm
(obfuscate the encryption function!).

To me, Security By Obscurity is known to be too weak to use,
and Security By Obfuscation is isomorphic to Security By Obscurity.
Consider obfuscation with a strong cipher: then all you have to
do is manage the keys.

One guiding principle of strong cryptography is that the algorithm
and source code are well known; the key is what is unknown.
Other approaches tend toward snake oil.

The problem with the DRM model is not that the crypto won't work,
it will if the keys are managed. But I've not seen anyone willing
to work hard enough to manage the key distribution and local key
management to make it real.

None of this addresses the problem that you want to do trusted operations
on a user's PC that is inherently untrustworthy. For some applications,
eyewash such as smartcards provides the needed level of appearance
of security. If that fits your case, fine. And Carl Ellison has
a great patent for a software-only smartcard; it was transferred to CyberCash,
and I assume transferred to Verisign. It proves that anything 
you want to do with a smartcard you can do with software in a client/server
model. Pretty cool.

Pat


Pat Farrell [EMAIL PROTECTED]
http://www.pfarrell.com





RE: crypto question

2002-03-21 Thread McMeikan, Andrew

Many thanks on all the pointers and interest.

Although I was planning on sneaking around making more progress before
letting the cat out of the bag, I guess it is time to expose it to some open
criticism.

This is just a plan so far, no code yet.  Until encryption code can be
safely split across nodes, it will have to have a central server (or a
group of trusted servers), rather than being fully distributed.

You will probably all point out many obvious pit-falls, if you do please
also offer suggestions ;)

I have thought of several ways of getting the job done, but I am sure there
are better.

Apologies to those I emailed a blank file to; I managed to wipe a
significant amount of work and have replaced it with something really
tacked together.

If I am stepping too hard on any patents, or too close to any other 'business
model' etc., a polite nudge is much better than a lawsuit.  Thanks.

http://pktp.sourceforge.net has a description of how I imagine it working.

I hope that explains exactly why I was making my enquiry.

Again many thanks for the many pointers.

cya,Andrew...





Re: crypto question

2002-03-20 Thread Pat Farrell

At 01:45 PM 3/21/2002 +1100, McMeikan, Andrew wrote:
> Question.  Is it possible to have code that contains a private encryption
> key safely?  Every way I look at it the answer seems no, yet some degree of
> safety might be possible by splitting an encrypting routine across several
> nodes.  Can someone give me a pointer to any work in this area?

I don't believe so, but maybe someone else on the list has a better answer.
Secret splitting will clearly make it harder for Mallet to gather the key.
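The simplest form of the secret splitting mentioned here is an n-of-n XOR split: each node holds a share that, on its own, is statistically independent of the key, so Mallet must compromise every node. A minimal sketch:

```python
import secrets

def split(key, n):
    """n-of-n XOR split: all n shares are needed; any n-1 reveal nothing."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    last = key
    for s in shares:                     # final share = key XOR all randoms
        last = bytes(a ^ b for a, b in zip(last, s))
    shares.append(last)
    return shares

def combine(shares):
    out = bytes(len(shares[0]))
    for s in shares:
        out = bytes(a ^ b for a, b in zip(out, s))
    return out

key = b"sixteen byte key"
parts = split(key, 3)
assert combine(parts) == key
# No single share equals the key (true except with negligible probability).
assert parts[0] != key
```

Threshold (k-of-n) splitting needs polynomial sharing rather than XOR, but the XOR version already captures why splitting raises the attacker's cost.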

In the past, Atalla (later Compaq, now HP) and Harris sold hardware boxes that
kept keys in tamper-proof enclosures. They worked because opening the box
destroyed the key. Banks used them heavily in the late 1990s.

The usual good solution is to make a human type in a secret.
The usual bad solution is to store it in a secret place, or encrypted with
a key kept elsewhere (source, secret file, LDAP, etc.)

The old CyberCash wallet, which used strong RSA keys, used simple 56-bit DES
to protect the private key on the local PC's hard disk. The thinking was
that users won't put enough entropy in their passphrases to really justify
3DES, and once one has physical access to the computer and hard drive, there
are simpler attacks than breaking the crypto on the key: keystroke sniffers
being one obvious example.
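A hedged sketch of this passphrase-protects-the-stored-key pattern, with PBKDF2 standing in for the key stretching and a SHA-256 counter keystream standing in for DES (illustrative only; real code would use an authenticated cipher, and the iteration count and salt size are arbitrary choices):

```python
import hashlib
import secrets

def wrap_key(private_key, passphrase, salt):
    """Encrypt a stored private key under a typed passphrase (toy cipher)."""
    # Stretch the passphrase so brute force costs 200,000 hashes per guess.
    k = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 200_000)
    # Toy keystream: SHA-256 of key-plus-counter blocks. NOT a real cipher.
    stream = b""
    counter = 0
    while len(stream) < len(private_key):
        stream += hashlib.sha256(k + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(private_key, stream))

unwrap_key = wrap_key   # XOR with the keystream is its own inverse

salt = secrets.token_bytes(16)
blob = wrap_key(b"-----RSA PRIVATE KEY-----", "correct horse", salt)
assert unwrap_key(blob, "correct horse", salt) == b"-----RSA PRIVATE KEY-----"
assert unwrap_key(blob, "wrong guess", salt) != b"-----RSA PRIVATE KEY-----"
```

Key stretching raises the cost of guessing weak passphrases, which is exactly the low-entropy worry behind the CyberCash design decision; it does nothing, of course, against a keystroke sniffer.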

I'd also love to hear of real solutions for protecting a key stored on local disk.

Pat



Pat Farrell [EMAIL PROTECTED]
http://www.pfarrell.com





Re: crypto question

2002-03-20 Thread Mike Brodhead


> The usual good solution is to make a human type in a secret.

Of course, the downside is that the appropriate human must be present
for the system to come up properly.  

In some situations, the system must be able to boot into a working
state.  That way, even if somebody accidentally trips the power-- I've
had this happen on production boxen --the system outage lasts only as
long as the boot time.  If a particular human (or one of a small
number of secret holders) must be involved, then the outage could be
measured in hours rather than minutes.

Don't forget that Availability is also an important aspect of
security.  It all depends on your threat model.

--mkb






Re: crypto question

2002-03-20 Thread dmolnar



On Thu, 21 Mar 2002, McMeikan, Andrew wrote:

> A question and a probe.

> Question.  Is it possible to have code that contains a private encryption
> key safely?  Every way I look at it the answer seems no, yet some degree of
> safety might be possible by splitting an encrypting routine across several
> nodes.  Can someone give me a pointer to any work in this area?

There are several different possible scenarios which fit this description.
My message will overlap a little with the other reply I've seen, for which
I apologize. Here they are in rough order of what I think you're asking.

1) You are trying to distribute an obfuscated binary which
encrypts/decrypts using a secret key, with the goal that the key resist
reverse engineering. The usual application for this is DRM, but you can
also use this to do public-key encryption from any symmetric algorithm
(obfuscate the encryption function!).

(disclaimer: I work for ShieldIP, which is a DRM company. All statements
and opinions here are my own.)

There's a recent result showing that there exist some functions which
*cannot* be obfuscated, for several technical formalizations of the notion
of 'obfuscation'. That result is available as:

On the (Im)possibility of Obfuscating Programs
Boaz Barak, Oded Goldreich, Russell Impagliazzo, Steven Rudich, Amit Sahai,
Salil Vadhan, Ke Yang
http://citeseer.nj.nec.com/barak01impossibility.html

It is important to note that this result doesn't necessarily apply to the
kinds of programs we want to obfuscate in practice. Rather it shows that
there is a large class of unobfuscatable functions and builds such
functions through clever means. At least that's my current take; I should
hedge here and say I haven't gone through it thoroughly -- I'd welcome
correction from anyone who's taken more time to map out the practical
implications (for instance, is it possible that a block cipher could be
obfuscated?).

Naturally this result hasn't stopped people from trying practical
techniques for code obfuscation. Cloakware (www.cloakware.com) is just one
of the companies pursuing research into software obfuscation. A Google
search for "code obfuscation" provides many links. I don't know
enough to say which of them are any good.

People have also tried to obtain a similar level of protection by
embedding code in tamper-resistant hardware. IBM's ABYSS project was an
early example of this aimed specifically at copy protection. That begat
Citadel, which begat the 4758, and thus was the begatting begun. As another
message mentions, Atalla/Compaq/HP and Wave Systems today do similar
things. I note that the Intertrust web page mentions a Rights|Chip which
may or may not do similar things. Bennet Yee's thesis, among other sources,
is a good place to learn about secure coprocessors.
ftp://www.cs.ucsd.edu/pub/bsy/pub/th.ps.gz

2) You have an application which uses private keys and you are worried
about writing them to disk. Your adversary is not the user, but someone
who may gain lunch-time access to the machine and not plant keyloggers,
bugs, etc, but only transfers files or swap to a diskette. This is kind of
a weak adversary, but it's also about what most co-workers or kid sisters
can mount, and hey we have to protect at least against them...

The best practice here, AFAIK, is to do what PGP does. Encrypt the private
key while it's on disk using some key not on the machine. Then use a
kernel driver to obtain memory which is guaranteed not to be paged to disk
and use that memory for all sensitive operations. Get yourself a copy of
the WinPGP source code and take a look.

3) You are worried about an adversary breaking in and stealing your own
signing or decryption key from your computer. You also just happen to have
a bunch of other computers lying around that are not running the same OS
or same version (so they are unlikely to be cracked at the same time as
your first machine).

Now you're in the territory of threshold cryptography and proactive
security. The MIT Threshold Cryptography page explains it better than I
could:
http://theory.lcs.mit.edu/~cis/cis-threshold.html

Dan Boneh's group has put some of these ideas into code:
http://theory.stanford.edu/~dabo/ITTC/

With proactive security, you refresh machines from time to time so as
to limit damage from machines which are compromised and then renewed.
Here's the abstract from the paper reporting on the IBM implementation.
http://www.cs.huji.ac.il/~feit/artzi/artzi18.html#abs1
that paper citation is

B. Barak, A. Herzberg, D. Naor, and E. Shai. The proactive security
toolkit and applications. In Proceedings of the 6th ACM Conference on
Computer and Communications Security (CCS'99), pages 18--27, Kent Ridge
Digital Labs, Singapore, November 1999. ACM SIGSAC, ACM

There used to be an IBM page specifically on the topic of proactive
security and they were even going to let people download the toolkit! I
don't think that actually happened. If it did, dude, I'd like to know.
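The proactive refresh idea can be sketched with additive shares: re-share zero and add it in, which changes every share while leaving the secret intact, so shares stolen before a refresh are useless afterwards (the modulus and share count below are arbitrary choices for the sketch):

```python
import random

P = 2**61 - 1   # Mersenne prime modulus, an illustrative choice

def additive_share(secret, n):
    """Split `secret` into n additive shares mod P; all n reconstruct it."""
    parts = [random.randrange(P) for _ in range(n - 1)]
    parts.append((secret - sum(parts)) % P)
    return parts

def refresh(parts):
    """Proactive refresh: add a fresh sharing of zero to every share.
    The secret is unchanged, but an old stolen share no longer combines
    with the new share set."""
    zeros = additive_share(0, len(parts))
    return [(p + z) % P for p, z in zip(parts, zeros)]

secret = 42424242
shares = additive_share(secret, 4)
new_shares = refresh(shares)

assert sum(new_shares) % P == secret     # secret preserved across refresh
assert new_shares != shares              # but the shares themselves changed
```

An attacker who steals shares from some machines before a refresh and others after it learns nothing, which is what limits the damage window to a single refresh period.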

-David Molnar