Re: following on from today's discussion

2006-08-19 Thread Matej Kovacic
A simple example of modifying traffic: 
http://www.schneier.com/blog/archives/2006/08/stealing_free_w.html

http://www.ex-parrot.com/~pete/upside-down-ternet.html

This could easily be applied to a Tor exit node too.

However, sniffing is not a problem if you are only visiting public 
websites (and do not exchange any personal information), but traffic 
injection could be.


Remember the Penet remailer? They were accused of helping to distribute 
child pornography. It was not true, and that was proved later. But the 
Penet admin decided to shut down the service anyway because of public pressure.
I am a little worried that someone will try to destroy the Tor network by 
sniffing, injecting, downloading child pornography/hacking through Tor 
and doing other nasty things...


I was thinking about a solution to prevent traffic injection on 
non-encrypted public websites. What about having TWO connections open and 
doing some kind of check that the content is the same (maybe access the 
content from two different locations and do an MD5 check)? I know the 
idea is hard to implement, since a website can serve different content for 
each location or change every second, and it could also double the load 
on the Tor network. But maybe someone will develop my idea into a usable 
form... If not, feel free to drop it.
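The comparison step of this idea can be sketched in a few lines of Python. This is a hypothetical illustration: the two byte strings stand in for the same URL fetched over two different connections, and MD5 is used only because the paragraph above suggests it.

```python
import hashlib

def digest(content: bytes) -> str:
    # MD5, as suggested above; any hash would do for a tamper check.
    return hashlib.md5(content).hexdigest()

def looks_tampered(copy_a: bytes, copy_b: bytes) -> bool:
    # copy_a and copy_b are the same URL fetched over two different
    # connections/locations. A mismatch means one path modified the
    # content -- or simply that the site serves dynamic pages, which
    # is why this can only ever be a heuristic.
    return digest(copy_a) != digest(copy_b)
```

A real tool would need to strip out legitimately dynamic parts (dates, ads, session IDs) before hashing, which is exactly the hard part the paragraph above acknowledges.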


bye, Matej


Re: following on from today's discussion

2006-08-19 Thread Roger Dingledine
On Fri, Aug 18, 2006 at 07:19:58PM -0500, Mike Perry wrote:
 I'd like to also add that it is possible for rogue Tor servers to go
 beyond simply eavesdropping on traffic. On one occasion I received a
 corrupt .exe file via Tor. It appeared to be just noise, but it woke
 me up to the possibility that it is quite feasible for Tor exit nodes
 to do all sorts of things to traffic: modifying .exes, injecting
 browser/media format exploits, etc. etc.

Correct. Woe is the day when a malicious Tor exit node also has a stolen
or purchased copy of a trusted CA's key.

But you're right, chaos can be had without even that extreme. This is
why Tor's packages (and other security packages) come with gpg signatures.

The Internet can be a scary place. And Tor users need to be even more
aware of this fact than ordinary Internet users. Part of what we need
to do is educate the world about all the security issues with being
an ordinary user on the Internet. (I don't think Tor introduces any
new attacks -- after all, here I am using my open wireless to get to my
shared cablemodem in my apartment in Cambridge, and I'd better be aware
of all sorts of possible attacks -- but it does change which attacks
you can expect to encounter.)

The next thing we need to do is continue to work on interfaces and
usability for end-user applications like Firefox. What does that lock
mean really? If I do (or don't) see the lock, what should I trust? How
can we make use of the plethora of anti-phishing schemes currently
under research?

And lastly, there's the issue of advocacy for authentication, integrity,
and confidentiality on the Internet in general. Translation: we need to
get everybody using SSL for everything.

 Since the Tor client scrubs
 logs, it can be difficult to tell which exit server was in fact
 responsible, especially if they only target a small percentage of
 connections.
 It might be nice if Vidalia had an option to retain some connection
 history in-memory only for a period of time on the order of 10s of
 minutes for the purposes of monitoring for malicious/censored exit
 nodes. 

Might be that Blossom is useful for you here (with a few tweaks). Or see
http://archives.seul.org/or/talk/Jul-2006/msg00040.html for more general
options. It's tricky to automate this idea and make it usable because
you'd also have to remember which application connection was involved,
since several different exit nodes can be active at the same time, for
example if they have different exit policies. And that's not something
that is simple to do and still be as safe.

--Roger



Re: A brief response on TRUTHWORTHY

2006-08-19 Thread Fabian Keil
[EMAIL PROTECTED] wrote:

 Fabian Keil [EMAIL PROTECTED] wrote
 
 I don't see the problem here. The option is called AllowInvalidNodes, 
 not DoNotOnlyUseTrustworthyNodes.
 
 You can't assume that every node not marked as invalid is trustworthy.

 I notice you snipped away quite a lot of what I wrote and I'd ask you to
 please read some of it again. If you have questions feel free to email
 me direct.

It was my intention to only snip the parts I didn't comment on.

 The term trustworthy comes from the passage in the manual, I didn't
 write it.
 
 http://tor.eff.org/tor-manual.html.en
 
 
 I quote
 
 AllowInvalidNodes entry|exit|middle|introduction|rendezvous|...
 Allow routers that the dirserver operators consider invalid (not
 trustworthy or otherwise not working right) in only these positions in
 your circuits. The default is middle,rendezvous, and other choices are
 not advised.

Yes, I know. All I'm saying is that there's a difference between a
node being trustworthy and a node merely not being marked as invalid.

As far as I understand it, if a node is not marked as invalid,
it means that the dirserver operators have no reason to believe
that it's untrustworthy or misbehaving. In other words, a valid
node isn't guaranteed to be trustworthy, but it very well might
not be.

After all, traffic-sniffing nodes can't be detected
unless they intend to be detected.
 
 The term muster essentially means
 
  to gather together (usually an army or troop) 
  
 So you would muster your men - OK so far?
 
 In this context (tor) it just means anyone who can put together a
 server. It has no connotation about that server's ability for or against
 accessing any keys, or its ability at all. I'm afraid you took a wrong
 turn there, sorry.

You're right, I thought "muster" was another word for "check".
 
 As for the Levels1..4 pushing folk away - on the contrary, everyone at
 the moment would slot into one or other of these categories. Just that
 some might not want to or get to the upper levels. There would be no
 loss of servers just the ability for the user to choose which level of
 security they prefer. That's democratic, yes/no?

But unless I failed to get it again, your levels would require
the checks you mentioned, so I think most of my concerns still apply?

BTW: instead of levels I'd prefer more optional labels.

What I'd like to have would be a Tor server option
that clarifies whether a Tor server provides unmodified
content. Like the MyFamily feature,
it would only be used by the good guys, but I still
think it would be useful.

I really hate it when Tor servers are chained with squid
and provide modified content without letting me know.
Some of them even provide default addresses for failed
DNS requests, instead of delivering a decent error message.

I think most of these Tor server operators have no
wrong intentions and wouldn't mind using such an option
if it existed. I would be glad if I could easily exclude
them.

As far as I remember, the Tor documentation doesn't
say anything about whether it's right or wrong to chain
Tor servers with content-modifying proxies, but I for
one think it's a bad idea.

  Of course you can still use your cryptic keys, if you want to, just
 like the internet uses ip addresses today. But for many internal torland
 websites, a user-friendly URL-like alternative, supported by something
 akin to a torlandDNS system, would be an advantage to get the average
 man/woman in the street interested.

As far as I understood, you intended to have some of your proposed
Tor server levels include a check of the Tor server. Is that correct?

I'm just saying that this check would weaken the Tor server's
security, because the person checking the server would get
hold of the server's private key.

 You know, you could always argue to do nothing, never create a Tor
 network, never use Tor, never encrypt, never invent guard nodes etc.
 
 It's easy - just think of the extreme case when these defenses don't work
 and reason it's not worth bothering to do in the first place.

I just think your proposed Tor server levels would not only
fail in the extreme case, but nearly always.
 
 But we dont - or at least not all of us do!
 
 The thing about security is like anything in life - it's an uphill
 struggle.
 
 Always changing, always getting more difficult, as your adversary gets
 better.
 
 Really just like LIFE and EVOLUTION, just like living viruses and
 bacterial adaptation to drugs etc.
 
 Every time you develop something, some monkey with a wrench comes along
 and makes all your efforts as nothing.
 
 
 The ONLY way to stay on top of this is to get out there and do something!

Hopefully something that changes the situation; I just
don't see how your proposed Tor server levels would gain anything.
 
 We ALL know this - it's our natural instinct, survival.
 
 So to keep these flood servers at bay we need to erect barriers, hence
 my Levels1..4.
 
 OK some Agent Blacks may be able to pass themselves off as home nodes
 but how many and will the tor community 

Re: A brief response on TRUTHWORTHY

2006-08-19 Thread maillist
-BEGIN PGP SIGNED MESSAGE-
Hash: SHA1


 I really hate it when Tor servers are chained with squid
 and provide modified content without letting me know.
 Some of them even provide default addresses for failed
 DNS requests, instead of delivering a decent error message.
 
 I think most of these Tor server operators have no
 wrong intentions and wouldn't mind using such an option
 if it existed. I would be glad if I could easily exclude
 them.

I'm running a Tor server chained with squid to save valuable bandwidth. It
saves about 2GB per day. No content is modified, only some error
messages by Squid (host not found etc.), which is default behavior. I
don't see anything wrong with that; correct me if I'm wrong.

Many ISPs transparently redirect HTTP traffic to Squid to save their
bandwidth.

Many websites (sadly) provide different content depending on your
top-level domain.

Do you have any examples of content that has been modified by a Tor
server chained with a proxy? I'm interested.

BTW: my tor server is SpongeBob.

M

-BEGIN PGP SIGNATURE-
Version: GnuPG v1.4.4 (MingW32)
Comment: Using GnuPG with Mozilla - http://enigmail.mozdev.org

iD8DBQFE5vOL6fSN8IKlpYoRAh8wAKCMhcxtR/Uv92CIOKs50kiAlRj/sQCdGynv
JQI6sCxZzD02AgO0kX/T2rU=
=b4EZ
-END PGP SIGNATURE-



Re: Exit Node sniffing solution...an idea...

2006-08-19 Thread Marco A. Calamari
On Fri, 2006-08-18 at 20:49 -0700, Anothony Georgeo wrote:
 Hi,
 
 I have been thinking about the issue of exit node
 operators and/or adversaries sniffing clear-text
 ingress/egress traffic locally and/or remotely on an
 exit node.  I have a possible solution but I would
 like the Tor devs. and experts here to weigh-in. 

In this thread I saw no mention of partitioning attacks.

In the past, mixnet networks that allow several
visible parameters to be chosen by the user (e.g. Mixmaster)
were considered vulnerable to partitioning attacks,
which can make traffic analysis easier by lowering
the anonymity set. The parameters can be non-default
remailer properties, fixed user-chosen chains, or
location diversity across Autonomous System domains.

I have read that it is generally agreed that traffic analysis
is the main avenue of attack on low-latency systems like Tor.

Is it possible that offering a lot of hand-configurable
parameters, and stressing the necessity of a personal,
smart choice (exit router, entry router, forbidden router
and so on), could make traffic analysis a lot easier
compared to a no-user-configurable-parameters situation?

JM2C

Ciao.   Marco
-- 

+--- http://www.winstonsmith.info ---+
| il Progetto Winston Smith: scolleghiamo il Grande Fratello |
| the Winston Smith Project: unplug the Big Brother  |
| Marco A. Calamari [EMAIL PROTECTED]  http://www.marcoc.it   |
| DSS/DH:  8F3E 5BAE 906F B416 9242 1C10 8661 24A9 BFCE 822B |
+ PGP RSA: ED84 3839 6C4D 3FFE 389F 209E 3128 5698 --+



signature.asc
Description: This is a digitally signed message part


Tor servers chained with proxies (was: A brief response on TRUTHWORTHY)

2006-08-19 Thread Fabian Keil
maillist [EMAIL PROTECTED] wrote:

  I really hate it when Tor servers are chained with squid
  and provide modified content without letting me know.
  Some of them even provide default addresses for failed
  DNS requests, instead of delivering a decent error message.
  
  I think most of these Tor server operators have no
  wrong intentions and wouldn't mind using such an option
  if it existed. I would be glad if I could easily exclude
  them.
 
 I'm running a Tor server chained with squid to save valuable bandwidth. It
 saves about 2GB per day. No content is modified, only some error
 messages by Squid (host not found etc.), which is default behavior. I
 don't see anything wrong with that; correct me if I'm wrong.

If they are set up correctly I don't see anything wrong with proxies
either. But I expect them not to provide outdated or modified content,
not to mess with error messages, and not to cache if the servers
asked them not to.

If I type an invalid address into my browser, I want to see the error
message of my locally running Privoxy, not some message sent by a
transparent proxy.

I assume that most default proxy configurations are as
broken as most default browser configurations; therefore
I'd rather exclude all Tor nodes that are chained with a
proxy, unless I know for sure that they really work
transparently.

 Many ISPs transparently redirect HTTP traffic to Squid to save their
 bandwidth.

If I were aware that my Tor server had no clean
web connection, I would block port 80.

 Many websites (sadly) provide different content depending on your
 top-level domain.

Not always to the Tor user's disadvantage. For example,
Tor allows me to order RC-1 DVDs from web shops that
aren't supposed to ship them to Germany, but don't
mind if the IP check is circumvented.
 
 Do you have any examples of content that has been modified by a Tor
 server chained with a proxy? I'm interested.

From time to time I get custom headers that were added by squid
or some other proxy and aren't set by the original web servers.
Some people might not care; I do.

Or I mistype a URL and instead of getting a real error message 
I'm redirected to another site. In fact, redirect isn't the
right word, because there is no 302 or 301 status code, but a 200.

Sometimes it's a custom error message sent with the wrong
status code, sometimes it's just another page with no content
that I'm interested in (this has happened only a few times so far).

As I said before, I don't think it's the result of bad intentions,
so labels like HTTPProxy, ModifiedContent, ModifiedHTTPBodies
and ModifiedHTTPHeaders could help.
 
 BTW: my tor server is SpongeBob.

Thanks.

Fabian
-- 
http://www.fabiankeil.de/




Quick Concerns re: Traffic injection + Tor Control port

2006-08-19 Thread Freemor
Hi All,

  The recent talk about an exit node's ability to inject/modify traffic
got me thinking that this might actually pose a security risk, not just
for users but possibly for the Tor network in general.

   My thinking went along these lines: it would be fairly trivial for
someone experienced in JavaScript or other scripting languages to inject
a script that would internally scan the receiving computer for an open
Tor control port (read: open and non-authenticated). Once that was found,
the script could easily and completely hose the security of that node by
messing with the routing/exit nodes/ControlListenAddress/etc. Bad enough
if it is a client Tor installation; tragic if it is an entry/middle/exit
node as well.

   I think the simple solution would be to require authentication if the
Tor control port is open.
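As a sketch of that solution, a torrc fragment along these lines would do it (option names as in Tor's documentation; the hash value is a placeholder to be filled in):

```
## torrc: require a password on the control port instead of
## accepting any local connection unauthenticated.
ControlPort 9051
## Paste in the output of: tor --hash-password <secret>
HashedControlPassword 16:...
```

With this set, a script that finds the port open still can't issue SETCONF or similar commands without first authenticating.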

   Now just to make a few things clear. 

   I do understand that allowing any scripting when using Tor is
hideously bad, but judging from all the clear-text passwords going
through, others may not.

   I most certainly am not a JavaScript wizard, and so could be way
off on the triviality of such a program.

   My intent in posting this was not to criticize but to stimulate
intelligent conversation on what I perceive as a possible risk to the
Tor network.



--

Freemor [EMAIL PROTECTED]

This e-mail has been digitally signed with GnuPG






Re: following on from today's discussion

2006-08-19 Thread Mike Perry
Thus spake Roger Dingledine ([EMAIL PROTECTED]):

 Correct. Woe is the day when a malicious Tor exit node also has a stolen
 or purchased copy of a trusted CA's key.

Eeep.

 The next thing we need to do is continue to work on interfaces and
 usability for end-user applications like Firefox. What does that
 lock mean really? If I do (or don't) see the lock, what should I
 trust?  How can we make use of the plethora of anti-phishing schemes
 currently under research?
 
 And lastly, there's the issue of advocacy for authentication,
 integrity, and confidentiality on the Internet in general.
 Translation: we need to get everybody using SSL for everything.

Time for a nice tinfoil-amplified SSL rant...

Is anyone in the world actively watching and tracking SSL certs beyond
simply verifying CA key signatures? Looking at the OCSP RFC
(http://rfc.sunsite.dk/rfc/rfc2560.html), it appears you are hard
pressed to tell if a cert is a duplicate or not:

'The good state indicates a positive response to the status inquiry.
At a minimum, this positive response indicates that the certificate is
not revoked, but does not necessarily mean that the certificate was
ever issued or that the time at which the response was produced is
within the certificate's validity interval.'

I mean, good goddess. So even if you are watching for revocations, you
are only handling half of the SSL threat model... Some form of
ssh-like fingerprint tracking really needs to be coupled with
CRL-style checks, so that you only accept a different cert than normal
for citibank.com if a revocation has actually been issued by them.
Especially when we have over 100 root certs spanning multiple
countries trusted by most browsers now.
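The ssh-like tracking idea might look something like this sketch (hypothetical; a real client would persist the pin store like ssh's known_hosts and query a CRL/OCSP responder itself rather than taking a boolean):

```python
import hashlib

# Hypothetical in-memory pin store: hostname -> certificate fingerprint.
_pins = {}

def fingerprint(der_cert: bytes) -> str:
    return hashlib.sha1(der_cert).hexdigest()

def check_cert(host: str, der_cert: bytes, old_cert_revoked: bool = False) -> str:
    """Trust-on-first-use, overridden only by an actual revocation."""
    fp = fingerprint(der_cert)
    known = _pins.get(host)
    if known is None:
        _pins[host] = fp
        return "pinned"       # first contact: remember this cert
    if fp == known:
        return "ok"           # same cert as last time
    if old_cert_revoked:
        _pins[host] = fp      # the old cert was revoked: accept the new one
        return "rotated"
    return "alert"            # different cert, no revocation: possible MITM
```

The point of the rant above is the last two branches: a new cert for a known host should only be accepted when the old one was actually revoked, regardless of how many root CAs would vouch for it.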

To add insult to injury, the only public OCSP server I can find seems
completely broken. Everything comes back with 'unknown' with bad
timestamps. Yes, even their demo key.

http://www.openvalidation.org/en/info/openssl.html


This client seems to be somehow issuing correct queries to Verisign's
OCSP responder according to Ethereal (even though it is configured to use
openvalidation.org), but the UI reports the same 'unknown' status as
'openssl ocsp' did:

http://www.openvalidation.org/ValWorks.html

Madness. 

-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs


Re: following on from today's discussion

2006-08-19 Thread Mike Perry
Thus spake Matej Kovacic ([EMAIL PROTECTED]):

 I was thinking about a solution to prevent traffic injection on 
 non-encrypted public websites. What about having TWO connections open and 
 doing some kind of check that the content is the same (maybe access the 
 content from two different locations and do an MD5 check)? I know the 
 idea is hard to implement, since a website can serve different content for 
 each location or change every second, and it could also double the load 
 on the Tor network. But maybe someone will develop my idea into a usable 
 form... If not, feel free to drop it.

So what about a stochastic solution instead:

1. Create some listing of exe files, commonly vulnerable doc formats, 
   and SSL sites that changes periodically, possibly scraped off Google.
2. Use some Perl glue to go through the Tor node list and try each exit
   to make sure they aren't modifying this data.
   a. Certs can be checked byte by byte to make sure they don't differ
      across exit nodes.
   b. Images, doc files, ppt files, exes can be verified against multiple
      sources.

A handful of hosts could run this thing and publish their results,
perhaps along with some other manually created list of undesirable
exits.

I think this is doable with perl, the Tor control port, wget, md5sum,
tsocks and 'openssl s_client', and is a lot more efficient than having
everyone verify everything always. The testing can be periodic, can
manually associate streams with connections so exits are known, etc.

If I'm not distracted by something shiny in the next couple days I'll
give it a shot. I mean, we've got to get these motherfuckin snakes off
this motherfuckin plane.
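The comparison core of such a scanner is simple enough. Here is a hypothetical Python sketch (Mike proposes Perl; `fetch_via_exit` is an assumed stub standing in for the control-port/tsocks/wget plumbing that pins each fetch to a specific exit):

```python
import hashlib

def find_suspect_exits(url, exits, fetch_via_exit):
    """Fetch `url` once per exit and flag exits whose copy disagrees
    with the majority. fetch_via_exit(url, exit_name) -> bytes is a
    hypothetical stub for the stream-to-circuit plumbing described above."""
    digests = {name: hashlib.md5(fetch_via_exit(url, name)).hexdigest()
               for name in exits}
    # Majority vote: the most common digest is presumed unmodified.
    counts = {}
    for d in digests.values():
        counts[d] = counts.get(d, 0) + 1
    majority = max(counts, key=counts.get)
    return [name for name, d in digests.items() if d != majority]
```

With static targets (exes, or certs pulled via 'openssl s_client') a single disagreeing exit stands out immediately; dynamic pages would need the multiple-source verification from step 2b above.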


-- 
Mike Perry
Mad Computer Scientist
fscked.org evil labs