Re: [Fwd: BugTraq - how to coverup the security]

2003-07-15 Thread Ian Grigg
Sean,

I apologise for the snippety email last night,
I obviously missed the point completely!

Sean Smith wrote:
> 
> > > Are other platforms more secure or do they just receive
> > > less scrutiny?  Or is it that Microsoft does not react quickly to
> > > found bugs? .
> 
> My point was just that the browser paradigm was not really designed with the
> idea of making the security status information always clearly distinguishable
> from the content provided by malicious servers.
> 
> In our project, we'd looked at popular browser/OS combinations (two years ago),
> and found that (with some cleverness) you could produce fairly convincing
> impersonations in many scenarios. The barriers were repeatedly permeable. E.g.,
> does the browser mark your popup window with a label that spoils the spoof? No
> problem: just send an image of the window instead.
> 
> As has been mentioned on this list before, we also designed and implemented a
> trusted path solution in Mozilla. (But this was complicated by the fact that
> each new release of Mozilla seemed to break our code :)

That is significant!  Was this code not
folded back into Mozilla?

> > The question at hand is this:  if secure browsing
> > is meant to be secure, but the security is so easy
> > to bypass, why are we bothering to secure it?
> >
> > Or, if we should bother to secure it, shouldn't
> > we mandate the security model as applying to the
> > browser as well?
> 
> Exactly.
> 
> That was the whole point of our Usenix paper last year
> 
> E. Ye, S.W. Smith.
> ``Trusted Paths for Browsers.''
> 11th Usenix Security Symposium. August 2002
> http://www.cs.dartmouth.edu/~sws/papers/usenix02.pdf

Oh, my!!  That is a significant effort.
From what I can see, you actually built
a browser with a security model, and
*tested* it against users.

That implies a *validated* security model
built against realised and known threats.

That's pretty unique!

I've only skimmed it so far, but it looks
like you are well ahead of us here.  I'm
curious to hear how successful you've been
in convincing the Mozilla people to
adopt this?

-- 
iang

-
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to [EMAIL PROTECTED]


httpsy, SSH and eternal resource locator/WAX (Re: Announcing httpsy://, a YURL scheme)

2003-07-15 Thread Adam Back
I'm not that familiar with SFS, but httpsy sounds quite related to
Anderson, Matyas and Petitcolas' "eternal resource locator" [1], and
the WAX system they describe in that paper.  This scheme allows a
referrer to embed, in a URL it points with, authentication information
about the contents of the page referred to
(either a SHA1 hash of the document, or a reference to a signing key the
publisher of the referred page may use to sign and update that page's
contents).  (WAX was also implemented in browsers, if I remember
correctly from an earlier reading of that paper.)
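The document-hash variant can be sketched in a few lines (the `#sha1=` URL
syntax here is purely illustrative; the paper's actual ERL encoding differs):

```python
# Sketch: a link carries a SHA-1 digest of the page it points to, and
# the fetcher rejects the page if the content no longer matches.
import hashlib

def make_erl(url: str, content: bytes) -> str:
    """Append the content's SHA-1 digest to the URL as a fragment."""
    digest = hashlib.sha1(content).hexdigest()
    return f"{url}#sha1={digest}"

def check_erl(erl: str, fetched: bytes) -> bool:
    """Verify fetched content against the digest embedded in the link."""
    url, _, frag = erl.partition("#sha1=")
    return hashlib.sha1(fetched).hexdigest() == frag

page = b"def hashcash(): ..."
link = make_erl("http://example.org/hashcash.py", page)
assert check_erl(link, page)            # unchanged content verifies
assert not check_erl(link, b"porn ad")  # swapped content is rejected
```

The signing-key variant would replace the raw digest with a key reference,
letting the publisher update the page without breaking the link.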

Their approach is more directly concerned with the risks of pointing at
random stuff on the web and having it change under you.  For example, I
had a pointer to a python implementation of hashcash; the domain of the
author's ISP got sold and is now a porn site, so people expecting some
python library code would instead have been bounced to porn.

They were also worried about referring to specific vetted instances of
a _version_ of a web page.  (The application was refereed web pages
with medical reference information).

httpsy seems to content itself doing something similar but based
solely on the identity (akin to the signature only variant in WAX)
where there is no guarantee about the content of the referred page.

From what I could understand of the httpsy pages, there is a use case
where you get redirected from a book purchase site
(e.g. amazon.com) to a book review site and then back from the reviewer
to the book site.  The claimed weakness of SSL is that a rogue
book review site could redirect you to a different, though also
certified, site.  Additionally, the use case supposes that the attacker
has gone to the trouble of getting a cert for a similar domain name
(e.g. amaz0n.com, with a zero instead of an o).

httpsy seems to claim that instead of showing you the hash of the
site's auth information (which Perry referred to), it will instead give
you the option to assign a pet name to that site (e.g. you put "BOOK
SITE" or "AMAZON" or whatever is mnemonic for you as an individual).
Then, if you get bounced to the wrong place, you'll be surprised that
amaz0n.com is not listed under your pet name but is instead prompting
you to check the hash and supply a pet name.  (This is similar to the
way SSH warns you if the host key changes for a site you're connecting
to again, or if you accidentally connect to a similarly named,
SSH-supporting site whose host key is not already in SSH's known-hosts file.)

So the httpsy proposal is really quite similar to SSH, but with pet
names.
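A minimal sketch of that pet-name-plus-known-hosts behaviour (all names and
fingerprints below are made up for illustration):

```python
# First visit binds a user-chosen pet name to the site's key fingerprint;
# later visits look the fingerprint up, and an unknown fingerprint
# (amaz0n.com's, say) triggers a first-visit warning instead of silently
# showing a trusted name -- like SSH on an unknown host key.
known_sites: dict[str, str] = {}  # fingerprint -> pet name

def assign_pet_name(fingerprint: str, pet: str) -> None:
    known_sites[fingerprint] = pet

def visit(fingerprint: str) -> str:
    pet = known_sites.get(fingerprint)
    if pet is not None:
        return f"Showing trusted pet name: {pet}"
    # Unrecognized key: behave like SSH prompting on an unknown host key.
    return "WARNING: unknown site key; check the hash and assign a pet name"

assign_pet_name("ab:12:cd:34:ef", "AMAZON")
print(visit("ab:12:cd:34:ef"))  # the pet name the user chose is shown
print(visit("99:99:00:00:11"))  # a rogue site's key gets the warning path
```

The security rests entirely on the user noticing the warning path, which is
exactly the human-factors question debated in this thread.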

Also I'm not sure what is special about pet names or introducers.  All
that will happen to my mind is that people will set up informative but
bogus meta-rating sites.  (Best bookseller "amaz0n.com" plus the rogue
amaz0n.com's auth data hash.)  And then again the user will end up
giving their credit card to the rogue site.  This differs from the SSL
attack, but I'm not sure it cleanly solves this kind of human
semantic-gap attack, or even necessarily improves the situation
over SSL.

One thing it does do, which is perhaps good, is avoid the central
trusted point.  (Imagine if SSH used Verisign as a CA: if you became
the target of some investigation and Verisign (or one of the other
50-odd CA vendors) complied with the LEAs, they could trick someone into
SSHing into a honeypot instead of the real host they intended to
reach.)  By using key distribution that is potentially out-of-band
(an emailed, PGP-signed host key or user key) but sticky (via the
known-hosts mechanism), SSH avoids that particular central trust issue.
It also (and this contributes greatly to SSH's success, if you ask me)
simplifies setup, as you don't need to pay money to Verisign et al. to
set up a host for SSH access.

Adam

[1] "The eternal resource locator: an alternative means of
establishing trust on the World Wide Web", Ross Anderson, Vaclav
Matyas, Fabien Petitcolas, 3rd USENIX workshop on electronic commerce
Augst 1998

http://www.cl.cam.ac.uk/~fapp2/papers/ec98-erl.pdf

On Tue, Jul 15, 2003 at 09:06:02AM -0400, Zooko wrote:
> 
> Tyler should probably reference SFS on his HTTPSY pages.  Here's a good paper 
> focussed specifically on this issue.
> 
> http://citeseer.nj.nec.com/mazieres99separating.html
> 
> Although I haven't looked closely at HTTPSY yet, I'm pretty sure that it 
> simply applies to the Web the same notion that SFS applies to remote 
> filesystems.
> 
> It is an excellent idea.
> 
> Regards,
> 
> Zooko


Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread sayke
At 09:21 AM 7/15/2003 -0400, someone with the password to 
[EMAIL PROTECTED] wrote:

SFS makes it practically impossible to do key updates, and the trust
model is rather flawed -- if you mount files from one site you in
practice end up trusting it totally, which means that it can hand you
links to spoofed other sites and you'll in practice totally believe
them unless you're paying very close attention and have the ability to
perfectly recognize long hashes by eye. It is a neat idea, and
certainly instructive, but I don't know that I particularly love it.
i think the difference between sfs and yurl lies in the yurl 
scheme's use of pet names to make long hashes easier to remember. while 
this seems like a promising approach, the thought of typing in a new pet 
name every time i visit a new domain (or mount a new volume via nfs) looks 
like too high a burden, interface-wise, on users in general.
perhaps if i could occasionally download (and authenticate with a 
[pet_name, hash] pair) pre-digested lists of such pairs from opennic or the 
eff etc, i might feel more inclined to use the system... this opens the 
possibility of multiple coexisting global namespaces, and raises ye olde 
"who do you trust" question...
perhaps we might as well design things that use [global_name, 
ip_address, pubkey_fingerprint, pet_name] sets, and just get it over with =D
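The list-download idea above can be sketched as follows (the JSON format and
the opennic/eff distributors are illustrative; only the authenticate-then-merge
step is the point):

```python
# Sketch: fetch a pre-digested [fingerprint -> pet_name] list from a
# distributor you already know by one trusted hash, verify the list
# against that hash, and only then merge it into your local namespace.
import hashlib
import json

def verify_and_merge(list_bytes: bytes, expected_sha1: str, local: dict) -> dict:
    if hashlib.sha1(list_bytes).hexdigest() != expected_sha1:
        raise ValueError("petname list failed authentication")
    local.update(json.loads(list_bytes))
    return local

# A published list, and its digest learned out of band (the one
# [pet_name, hash] pair you type in yourself).
published = json.dumps({"deadbeef" * 5: "EFF", "cafebabe" * 5: "OpenNIC"}).encode()
trusted_digest = hashlib.sha1(published).hexdigest()

names = verify_and_merge(published, trusted_digest, {})
assert names["deadbeef" * 5] == "EFF"
```

This just pushes the "who do you trust" question up one level, to whoever
signs the list, which is sayke's multiple-coexisting-namespaces point.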

sayke, v3.0
/*
"Do not imagine that Art is something designed to give gentle uplift and 
self-confidence. Art is not a brassiere. At least, not in the English 
sense. But do not forget that brassiere is French for life-jacket." -- 
Julian Barnes, Flaubert's Parrot (1984)
*/



Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Ed Gerck


Tyler Close wrote:

> Please read the provided documentation.
> ...

This is what your documentation says about key revocation:

 "When using YURLs, sysadmins can shorten the lifetime of a
  certificate, change keys more frequently, and thus reduce
  their site's vulnerability to identity theft. Keys could even be
  changed at a frequency that would enable the site to forgo
  certificate revocation and Certificate Revocation Lists (CRLs).

Really? What prevents the attacker from having a rogue site
with the stolen key if there is nowhere to verify whether the
key is valid or not?

From your other URLs, I also read:

 "A YURL MUST provide all the information required to
 authenticate the target site. Authentication of the target
 site MUST ONLY rely on information contained in the
 YURL."

The YURL is the single point of control and that is a problem,
not a solution. The YURL must also be recognized as a single
point of failure -- i.e., no matter how trustworthy that single point
of control is, it may fail or be compromised and there is no recourse
available because it is the single point of control.
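The check that the quoted definition implies can be sketched in a few lines
(the `httpsy://<fingerprint>@host/` syntax and SHA-1 choice here are
illustrative, not the actual YURL encoding) -- note that nothing in it can
express revocation:

```python
# Sketch: the URL itself carries the key fingerprint, and the client
# compares it against a hash of whatever public key the server presents.
# No revocation service is ever consulted, so a stolen key verifies
# for as long as the YURL circulates.
import hashlib

def yurl_authenticates(yurl: str, server_pubkey: bytes) -> bool:
    # e.g. httpsy://<fingerprint>@example.com/
    fingerprint = yurl.split("//")[1].split("@")[0]
    return hashlib.sha1(server_pubkey).hexdigest() == fingerprint

key = b"-----BEGIN PUBLIC KEY----- (example key bytes)"
url = "httpsy://" + hashlib.sha1(key).hexdigest() + "@example.com/"

assert yurl_authenticates(url, key)            # legitimate site passes...
# ...and so does any rogue site holding a stolen copy of the same key.
assert not yurl_authenticates(url, b"other")   # only a *different* key fails
```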

Cheers,
Ed Gerck




Will 'Distributed Cloud' Network Structures Prevent Censorship?

2003-07-15 Thread Steve Schear
Bennett Haselton believes that de-centralized information storage and 
transmission systems - so-called 'Distributed Cloud' networks like 
Peekabooty, FreeNet and Gnutella - will not prevent Internet censorship 
in the long run. He has written a short essay 
(http://www.peacefire.org/techpapers/distributed-cloud.html) 
pointing out the flaws he perceives in these systems.

Ian Clarke, the architect of the FreeNet project, responds here 
http://slashdot.org/~Sanity/journal/37275, defending Distributed Cloud 
systems and their abilities to prevent censorship, protect the identities 
of their users, and shield users from legal liability.

steve

"There is no protection or safety in anticipatory servility."
Craig Spencer


Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Ed Gerck


Ben Laurie wrote:

> Ed Gerck wrote:
>
> From your URLs:
> >
> > "The browser verifies that the fingerprint in the URL matches the public key 
> > provided by the visited site. Certificates and Certificate Authorities are 
> > unnecessary. "
> >
> > Spoofing? Man-in-the-middle? Revocation?
> >
> > Also, in general, we find that one reference is not enough to induce trust. 
> > Self-references
> > cannot induce trust, either (Trust me!). Thus, it is misleading to let the 
> > introducer
> > determine the message target, in what you call the "y-property". Spoofing and
> > MITM become quite easy to do if you trust an introducer to tell you where to go.
>
> BTW, tell me how you do spoofing and MITM if you aren't the trusted
> introducer (if you are, clearly there's no need to spoof or MITM,
> because you can just give the target of your choice)?

My point exactly. Trust can also be seen as that which can break your system.
By believing in *one* trusted introducer -- a single source of information, a
single trusted source -- you have no correction channel available.  One of the
earliest references to this principle can be found some five hundred years ago
in the Hindu governments of the Mogul period, which are known to have used at
least three parallel reporting channels to survey their provinces with some
degree of reliability, notwithstanding the additional effort. More in
http://nma.com/papers/e2e-security.htm

Cheers,
Ed Gerck




Re: [Fwd: BugTraq - how to coverup the security]

2003-07-15 Thread Bill Frantz
At 5:51 PM -0700 7/14/03, Sean Smith wrote:
>If you don't design a trusted path into the system, why should
>you expect there to be one?

The idea of "trusted path" seems to have been lost in history.  Both Redhat
Linux and Macintosh System X have the worrisome habit of asking you for
your administrator password (root password in the case of Redhat) as part
of their online system update procedure.  It seems to me that any program
could pop up such a dialog, and it wouldn't look any different.

Back in the old days, flipping the online/offline switch on a 3270 terminal
would cause VM/370 to disconnect the currently logged on user and display
the logon screen.  KeyKOS uses the "SysReq" key for the same purpose.
Trusted path was an Orange Book requirement.  What happened?

Cheers - Bill


-
Bill Frantz   | "A Jobless Recovery is | Periwinkle -- Consulting
(408)356-8506 | like a Breadless Sand- | 16345 Englewood Ave.
[EMAIL PROTECTED] | wich." -- Steve Schear | Los Gatos, CA 95032, USA





Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Ed Gerck
Ben Laurie wrote:

> Ed Gerck wrote:
> > Also, in general, we find that one reference is not enough to induce trust. 
> > Self-references
> > cannot induce trust, either (Trust me!). Thus, it is misleading to let the 
> > introducer
> > determine the message target, in what you call the "y-property". Spoofing and
> > MITM become quite easy to do if you trust an introducer to tell you where to go.
>
> What is a CA other than an introducer?

Maybe that's why CAs are still around...they do not tell you where to go. Instead,
there are two assertions that a CA should deliver in a certificate according to X.509:

(i) that the subject’s public-key has a working private-key counterpart somewhere, and

(ii) that the subject’s DN is unique to that CA.

These assertions should also be delivered without content disclaimers, but they
are limited in scope by the CPS. In addition, caveats apply in both cases. For
example, in (i), there are no warranties that the public/private key pair is
not artificially weakened, that the private key is actually in the possession
of the named subject, or that no one else has obtained a copy of the private
key. In (ii), there are no warranties that the DN contains the actual
subject's name or location, or that the subject even exists or has a correctly
spelled name.

(From Overview of Certification Systems, E. Gerck, 1997, copy
at  http://www.thebell.net/papers/certover.pdf )

Cheers,
Ed Gerck





Re: Information-Theoretic Analysis of Information Hiding

2003-07-15 Thread David Honig
At 12:30 AM 7/15/03 -0400, Don Davis wrote:
>"An electrical engineer at Washington University
> in St. Louis has devised a theory that sets the
> limits for the amount of data that can be hidden
> in a system and then provides guidelines for how
> to store data and decode it. Contrarily, the
> theory also provides guidelines for how an
> adversary would disrupt the hidden information.

"But the theory answers the questions, what is the optimal attack.."

There are ways of preventing any modification (attack) of the
carrier.  E.g., sign the carrier (with the private
half of a widely published public key) -- although this
technique would attract attention until it became widespread.

Note that Disney has to do this as well as Osama,
lest someone post Disney content, with the "not ok to copy freely"
watermark mutated.  Otherwise a downloader would
protest, "but the file said it was free, and the
included-file-hash said it was intact!"  (Because
the mutator also provided a new hash.)

(Disney's situation is worse, of course, because 
even the pristine, Disney-signed content is copyable at the
analog (etc) level.  And Osama can use multiple images as carriers
for a single message.)
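The sign-the-carrier idea can be sketched as below. A real scheme would use a
public-key signature (only the publisher can sign, anyone can verify); since
Python's standard library has no asymmetric crypto, HMAC stands in here, and
the key must be imagined as the publisher's private signing key:

```python
# Sketch: the publisher tags the carrier; any mutation of the carrier
# (e.g. flipping a "not ok to copy" watermark) invalidates the tag,
# because the mutator cannot forge a matching one.
import hashlib
import hmac

PUBLISHER_KEY = b"stand-in for the publisher's signing key"

def publish(carrier: bytes) -> tuple[bytes, str]:
    tag = hmac.new(PUBLISHER_KEY, carrier, hashlib.sha256).hexdigest()
    return carrier, tag

def verify(carrier: bytes, tag: str) -> bool:
    expected = hmac.new(PUBLISHER_KEY, carrier, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

movie, tag = publish(b"frames... watermark=NOT-OK-TO-COPY ...frames")
assert verify(movie, tag)

# A mutator who flips the watermark cannot produce a matching tag:
mutated = movie.replace(b"NOT-OK-TO-COPY", b"ok-to-copy-freely")
assert not verify(mutated, tag)
```

This only authenticates provenance; as the parenthetical above notes, it does
nothing against copying the pristine, validly signed content.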








Re: [Fwd: BugTraq - how to coverup the security]

2003-07-15 Thread Sean Smith

> > Are other platforms more secure or do they just receive
> > less scrutiny?  Or is it that Microsoft does not react quickly to
> > found bugs? .

My point was just that the browser paradigm was not really designed with the
idea of making the security status information always clearly distinguishable
from the content provided by malicious servers.

In our project, we'd looked at popular browser/OS combinations (two years ago),
and found that (with some cleverness) you could produce fairly convincing
impersonations in many scenarios. The barriers were repeatedly permeable. E.g.,
does the browser mark your popup window with a label that spoils the spoof? No
problem: just send an image of the window instead.

As has been mentioned on this list before, we also designed and implemented a
trusted path solution in Mozilla. (But this was complicated by the fact that
each new release of Mozilla seemed to break our code :)

> The question at hand is this:  if secure browsing
> is meant to be secure, but the security is so easy
> to bypass, why are we bothering to secure it?
> 
> Or, if we should bother to secure it, shouldn't
> we mandate the security model as applying to the
> browser as well?

Exactly.

That was the whole point of our Usenix paper last year

E. Ye, S.W. Smith.
``Trusted Paths for Browsers.''
11th Usenix Security Symposium. August 2002 
http://www.cs.dartmouth.edu/~sws/papers/usenix02.pdf

---Sean
-- 
Sean W. Smith, Ph.D. [EMAIL PROTECTED]   
http://www.cs.dartmouth.edu/~sws/   (has ssl link to pgp key)
Department of Computer Science, Dartmouth College, Hanover NH USA






Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Perry E. Metzger

"Zooko" <[EMAIL PROTECTED]> writes:
> Although I haven't looked closely at HTTPSY yet, I'm pretty sure that it 
> simply applies to the Web the same notion that SFS applies to remote 
> filesystems.
> 
> It is an excellent idea.

SFS makes it practically impossible to do key updates, and the trust
model is rather flawed -- if you mount files from one site you in
practice end up trusting it totally, which means that it can hand you
links to spoofed other sites and you'll in practice totally believe
them unless you're paying very close attention and have the ability to
perfectly recognize long hashes by eye. It is a neat idea, and
certainly instructive, but I don't know that I particularly love it.

The "YURL" idea seems to suffer from most of the same flaws.

-- 
Perry E. Metzger[EMAIL PROTECTED]



Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Zooko

Tyler should probably reference SFS on his HTTPSY pages.  Here's a good paper 
focussed specifically on this issue.

http://citeseer.nj.nec.com/mazieres99separating.html

Although I haven't looked closely at HTTPSY yet, I'm pretty sure that it 
simply applies to the Web the same notion that SFS applies to remote 
filesystems.

It is an excellent idea.

Regards,

Zooko




Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Tyler Close
On Monday 14 July 2003 21:27, Perry E. Metzger wrote:
> Tyler Close <[EMAIL PROTECTED]> writes:
> > The security properties enforced by a YURL implementation are
> > clearly defined at:
> >
> > http://www.waterken.com/dev/YURL/Definition/
>
> I'm afraid they aren't clearly defined at all. I've read the page, and
> I must admit that as peripherally interesting as it might be, for
> example, for you to introduce us to the sociologist Mark Granovetter's
> work on diagrams, etc., and as nice as it is for you to have lots of
> references listed, you've not explained your threat model in a way
> that I readily understand.

I am happy to describe the YURL security model in your preferred
documentation format.  Ideally, if you have a link for a
description of the HTTPS security model that you find acceptable,
I will try to mimic it for HTTPSY.

Thank you,
Tyler

-- 
The union of REST and capability-based security:
http://www.waterken.com/dev/Web/



Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Ben Laurie
Ed Gerck wrote:

> From your URLs:
> 
> "The browser verifies that the fingerprint in the URL matches the public key 
> provided by the visited site. Certificates and Certificate Authorities are 
> unnecessary. "
> 
> Spoofing? Man-in-the-middle? Revocation?
> 
> Also, in general, we find that one reference is not enough to induce trust. 
> Self-references
> cannot induce trust, either (Trust me!). Thus, it is misleading to let the introducer
> determine the message target, in what you call the "y-property". Spoofing and
> MITM become quite easy to do if you trust an introducer to tell you where to go.

BTW, tell me how you do spoofing and MITM if you aren't the trusted
introducer (if you are, clearly there's no need to spoof or MITM,
because you can just give the target of your choice)?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff





Re: Announcing httpsy://, a YURL scheme

2003-07-15 Thread Ben Laurie
Ed Gerck wrote:

> From your URLs:
> 
> "The browser verifies that the fingerprint in the URL matches the public key 
> provided by the visited site. Certificates and Certificate Authorities are 
> unnecessary. "
> 
> Spoofing? Man-in-the-middle? Revocation?
> 
> Also, in general, we find that one reference is not enough to induce trust. 
> Self-references
> cannot induce trust, either (Trust me!). Thus, it is misleading to let the introducer
> determine the message target, in what you call the "y-property". Spoofing and
> MITM become quite easy to do if you trust an introducer to tell you where to go.

What is a CA other than an introducer?

Cheers,

Ben.

-- 
http://www.apache-ssl.org/ben.html   http://www.thebunker.net/

"There is no limit to what a man can do or how far he can go if he
doesn't mind who gets the credit." - Robert Woodruff





Information-Theoretic Analysis of Information Hiding

2003-07-15 Thread Don Davis
"An electrical engineer at Washington University
 in St. Louis has devised a theory that sets the
 limits for the amount of data that can be hidden
 in a system and then provides guidelines for how
 to store data and decode it. Contrarily, the
 theory also provides guidelines for how an
 adversary would disrupt the hidden information.

"The theory is a fundamental and broad-reaching
 advance in information and communication systems
 that eventually will be implemented in commerce
 and numerous homeland security applications --
 from detecting forgery to intercepting and
 interpreting messages sent between terrorists.

"Using elements of game, communication and
 optimization theories, Jody O'Sullivan, Ph.D.,
 professor of electrical engineering at Washington
 University in St. Louis, and his former graduate
 student, Pierre Moulin, Ph.D., now at the
 University of Illinois, have determined the
 fundamental limits on the amount of information
 that can be reliably hidden in a broad class of
 data or information-hiding problems, whether they
 are in visual, audio or print media."

http://www.eurekalert.org/pub_releases/2003-07/wuis-tch071303.php


---


Information-Theoretic Analysis of Information Hiding

by Pierre Moulin and Joseph A. O'Sullivan

Abstract:

"An information--theoretic analysis of information
 hiding is presented in this paper, forming the
 theoretical basis for design of information--hiding
 systems.  Information hiding is an emerging research
 area which encompasses applications such as copyright
 protection for digital media, watermarking, finger-
 printing, steganography, and data embedding.  In these
 applications, information is hidden within a host data
 set and is to be reliably communicated to a receiver.
 The host data set is intentionally corrupted, but in
 a covert way, designed to be imperceptible to a casual
 analysis.  Next, an attacker may seek to destroy this
 hidden information, and for this purpose, introduce
 additional distortion to the data set.  Side information
 (in the form of cryptographic keys and/or information
 about the host signal) may be available to the information
 hider and to the decoder.

"We formalize these notions and evaluate the {\em hiding
 capacity}, which upper--bounds the rates of reliable
 transmission and quantifies the fundamental tradeoff
 between three quantities: the achievable information--
 hiding rates and the allowed distortion levels for the
 information hider and the attacker.  The hiding capacity
 is the value of a game between the information hider
 and the attacker.  The optimal attack strategy is the
 solution of a particular rate-distortion problem, and
 the optimal hiding strategy is the solution to a channel
 coding problem.  The hiding capacity is derived by
 extending the Gel'fand-Pinsker theory of communication
 with side information at the encoder.  The extensions
 include the presence of distortion constraints, side
 information at the decoder, and unknown communication
 channel.  Explicit formulas for capacity are given in
 several cases, including Bernoulli and Gaussian problems,
 as well as the important special case of small distortions.
 In some cases, including the last two above, the hiding
 capacity is the same whether or not the decoder knows
 the host data set.  It is shown that many existing
 information--hiding systems in the literature operate
 far below capacity." 

Sept. '02 version of the paper:

http://www.ifp.uiuc.edu/~moulin/Papers/IThiding99r.ps.gz
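As background for the abstract's reference to extending Gel'fand-Pinsker: the
classical result (a standard statement, not taken from the paper above) gives
the capacity of a channel whose state $S$ is known non-causally to the encoder
only, via an auxiliary variable $U$:

```latex
C \;=\; \max_{p(u \mid s),\; x(u,s)} \bigl[\, I(U;Y) \;-\; I(U;S) \,\bigr]
```

In the hiding setting, the host data plays the role of the encoder-known state,
and the paper wraps this expression in a max-min game between hider and
attacker, subject to distortion constraints on each.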
