(i know that at least jake and ian understand all the nuances here, probably 
better than me.)

but still, i would like you to consider, for a moment, this question:

suppose there were a service that intentionally wanted to protect recipients of 
communications
from malicious traffic?   when i was at $big_provider, i spent an awful lot of 
time and energy
communicating with colleagues and sharing threat intelligence about bad guys.

i.e. accumulating reputation information about the counterparties.

any mechanism to do this (that i could think of, anyway) presents a possible risk to
those communicants who want no attributable state saved about their communication.
either these are privacy freaks (not intended pejoratively: for whatever reason,
they're entitled to be…) or criminals.

it's really hard to engineer systems that will satisfy the needs of privacy freaks
while still protecting the naive, without at the same time equipping criminal
enterprises.  most of us seem willing to engineer systems that trust ourselves (the
operators of the facility) to have good taste in protecting all but the criminals.
only a few of us are willing to go as far as "you can trust us because you don't
have to".

i still believe microsoft is trying to do the right thing here for 99+% of their
users, but they can't help getting slammed because they haven't been crystal clear
about it, instead hiding the activity behind weasel words and legalese in their TOS.
i also agree that relying on an old and inapplicable security review would be a
deceptive practice.

i agree with ian that telling people what your system does so they can manage their
own risks (transparency) is a good middle ground.  (but it also enables criminals to
know how to avoid detection, which is not a societal good.)

(so now we all know, skype is not suitable for privacy freaks or criminals!  
woo hoo.)

(btw, keep in mind that any hosting provider can inspect hosted web content on their
backends, which would show nothing in web access logs.  their TOS doubtless permits
that.  there is nothing that i know of that prohibits your provider from looking at
your hosted content or your site activity, unless stored communication is involved,
and even then there are provider exceptions such as for malware and AV scanning.)

a few other comments interlineated.

On May 20, 2013, at 7:55 AM, Jacob Appelbaum <ja...@appelbaum.net> wrote:

> Mark Seiden:
>> i think we are having a misunderstanding here.
>> 
>> any sort of opt-in or opt-out doesn't work in the account takeover scenario, 
>> which is 
>> very common these days.
>> 
>> the bad guy will always have a relationship through the buddy list, which is 
>> exactly
>> why they are using taken over accounts.
>> 
>> the situation you are "imagining" is the way it was prior to the rash of 
>> account takeovers,
>> and the way it might be if accounts could not be taken over easily (e.g. if 
>> they used 
>> 2 factor or some other way of knowing the customer was authentic).
>> 
> 
> Indeed.
> 
> It also depends entirely on the end user software. Often it is possible
> that there are two users with the same name but with different
> identifiers. This also doesn't stop people from registering domains that
> look-alike, I might add. We already see this kind of behavior with
> phishing and we have continued to see it for the better part of a decade.

yes, but good guys and brand protection companies routinely look for lookalike
domains and phishing activity, both passively (zillions of honeypot mailboxes) 
and actively
(looking at dns activity).
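
(for flavor, here's a toy version of that lookalike check, in python.  the
homoglyph table, brand watch list, and threshold are all invented for
illustration; real brand protection pipelines are vastly more elaborate, and
the passive side just feeds candidate domains from those honeypot mailboxes
into checks like this at scale:)

    # toy lookalike-domain detector.  everything here is illustrative.
    HOMOGLYPHS = {"0": "o", "1": "l", "3": "e", "5": "s", "rn": "m", "vv": "w"}

    def normalize(label):
        """fold common character tricks back to their ascii lookalikes."""
        label = label.lower()
        for trick, plain in HOMOGLYPHS.items():
            label = label.replace(trick, plain)
        return label

    def edit_distance(a, b):
        """plain levenshtein distance; fine for short domain labels."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,                # deletion
                               cur[j - 1] + 1,             # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    BRANDS = ["paypal", "microsoft", "skype"]   # hypothetical watch list

    def looks_like_brand(domain, max_dist=1):
        """flag domains whose leftmost label is suspiciously close to a
        brand.  a real system would first drop the brand's own domains."""
        label = normalize(domain.split(".")[0])
        return any(edit_distance(label, b) <= max_dist for b in BRANDS)

    # e.g. looks_like_brand("paypa1.example") -> True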


> 
> There are obviously smart heuristics for ways to flag a message -
> however, if I was pwning such a system, I would just own the content
> inspection system at a different level - say, by fingerprinting the
> first request and not returning malware. Only when the user, who is easy
> to distinguish from Microsoft, visits the site will they get the actual
> targeted malware. This is also what we see with web pages that provide
> browser specific exploits on a per user basis.
> 

right.  because one needs the right credentials to see the malicious payload,
microsoft is supplying the complete URLs.  makes sense to me.

yup.   the earliest hits on a brand new malicious web site, before a
spam campaign is deployed, are likely to be AV/security companies, their 
hosting facility,
and some crawlers trying to discover new content, but also the bad guys testing 
their content prior to deployment.  the stupider criminals deliver payloads even
in such circumstances (because they don't have to be smart to succeed).

the smarter criminals filter based on ip address, initially.  you have the 
wrong address, you get 
a 404.  sometimes they're too smart for their own good, and whitelist their own 
c&c addresses,
oops.
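
(concretely, that gatekeeping is about this much code.  a python sketch from
the operator's side, with invented network ranges; nobody's actual kit looks
like this, of course:)

    import ipaddress

    # ranges the operator wants to hide from: scanners, AV vendors, the
    # hoster's own monitoring.  made-up networks for illustration.
    BLOCKED = [ipaddress.ip_network(n)
               for n in ("203.0.113.0/24", "198.51.100.0/24")]

    def serve(client_ip):
        """wrong address -> an innocuous 404; right address -> the payload."""
        addr = ipaddress.ip_address(client_ip)
        if any(addr in net for net in BLOCKED):
            return 404, "not found"              # scanners see a dead site
        return 200, "<exploit kit landing page>"

    # the "oops" above: some operators invert this into an allowlist that
    # includes their own c&c addresses, so a single recovered config hands
    # defenders that infrastructure for free.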

by shutting sites down at the earliest point, we only teach the criminals how we
must have found them, and they become smarter.

we have already trained the bad guys to lovingly age their sites (10 months in 
french oak?)
before using them for malicious activities, because some of us rely on a simple 
metric such as 
site age, rather than site behavior (which is more expensive to fake).
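
(the age metric, reduced to its essentials, looks something like this.
get_creation_date is a hypothetical stand-in for a whois lookup, and the
300-day threshold is invented -- the point is that nothing here costs the
attacker anything but patience, whereas behavioral signals cost them
operational effort to fake:)

    from datetime import datetime, timedelta

    def get_creation_date(domain):
        """hypothetical helper; in practice, a whois lookup."""
        raise NotImplementedError

    def naive_reputation(domain, min_age_days=300):
        """the simple metric criticized above: age alone.  an attacker
        beats it just by registering early and waiting it out."""
        age = datetime.utcnow() - get_creation_date(domain)
        return "trusted" if age > timedelta(days=min_age_days) else "suspicious"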

(i expect microsoft is not being so naive as to conclude that, because there is no
early malicious activity, the site is clean, as opposed to "not yet dirty".   this
means they need to continue testing, by the way.)


> The other reason to get the buddy list is that the social graph is
> almost as important as the content, if not more important for some groups.
> 

very true.  both for the good guys and the bad guys.  

> All the best,
> Jacob

_______________________________________________
cryptography mailing list
cryptography@randombit.net
http://lists.randombit.net/mailman/listinfo/cryptography
