Nathan of Guardian:
> On Mon, Feb 23, 2015, at 08:31 AM, Hans-Christoph Steiner wrote:
>> I highly recommend reading this article about the latest from the Snowden
>> leaks:
>>
>> https://firstlook.org/theintercept/2015/02/19/great-sim-heist/
>
> Just so I am thinking about this correctly, this still requires the
> attacker to intercept the over-the-air radio signals between the phone
> and the tower, and not somewhere upstream on the net, right? Not that it
> wouldn't be hard for a well-funded adversary to do this, or for this to
> be done, say using a stingray device mounted on an airplane...
Over-the-air intercepts would be decryptable with the SIM card keys. And like
Matej says, they could then _passively_ intercept the radio communications,
then decrypt, making it extremely difficult to detect when it's happening.
Imagine trying to detect whether someone has a radio that is listening to a
given FM channel: it's the same idea. But I imagine that cell towers are not
responsible for decrypting the messages, and that that happens deeper in the
telecoms' networks. So then anyone who can tap the telecom's network can get
all the encrypted messages. This is something that we have seen many
governments around the world set up. For example, Libya's telecom network
infrastructure was built entirely around the idea that all traffic had to go
through a central intercept and control point. Another key example: most of
Ukraine's telecom traffic goes through Russia somehow.

>> What is directly relevant for anyone who is working with well-funded state
>> actors in mind is the description of the whole process of profiling
>> organizations in order to find out how they can be infiltrated. For the
>> most part, it sounds like Gemalto's security was terrible (i.e. plain-text
>> FTP for sending secret keys), but it sounds like they improved it a lot,
>> and those targeting Gemalto had to find new approaches for getting key
>> material.
>
> Back to our Google Play phishing attack... thinking how someone without
> 2-factor auth enabled, and some sort of locked-down policy on app
> updating, might have gotten their account compromised. This would have
> resulted in, say, a new APK being published, but with a different signing
> key. Still a good attack, as many users would simply uninstall and
> reinstall to get the upgrade, thinking it was their fault the update had
> a problem.

Yes, for existing apps, the attacker could not transparently update them
without having the signing key.
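To make that last point concrete: Android only installs an update if it is signed by the same certificate as the APK already on the device. Here is a minimal sketch of that continuity rule, with hypothetical helper names and stand-in certificate bytes, comparing SHA-256 certificate fingerprints (not Android's actual implementation, just the idea):

```python
import hashlib

def cert_fingerprint(cert_der: bytes) -> str:
    """SHA-256 fingerprint of a signing certificate (DER bytes)."""
    return hashlib.sha256(cert_der).hexdigest()

def update_allowed(installed_cert: bytes, update_cert: bytes) -> bool:
    """Android-style rule: an update installs only if it is signed by
    the same certificate as the app already on the device."""
    return cert_fingerprint(installed_cert) == cert_fingerprint(update_cert)

# Hypothetical certificates: the developer's real cert vs. an attacker's.
developer_cert = b"dev-cert-der-bytes"
attacker_cert = b"attacker-cert-der-bytes"

print(update_allowed(developer_cert, developer_cert))  # True: normal update
print(update_allowed(developer_cert, attacker_cert))   # False: rejected
```

This is why the attacker who takes over a Google Play account can only publish under a new key: without the original signing key, existing installs refuse the update, which forces the uninstall/reinstall dance Nathan describes.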
But this attacker could also be working on getting private key material, for
example by using browser/JavaScript/Flash/etc. exploits in the phishing link
that then run code to upload all the keyring files they can find. Such an
exploit could also contain a keylogger that looks for the key password. These
kinds of exploits are all documented in the materials on FinFisher, for
example, or from various pwn2own competitions. One takeaway here: developers
should NEVER copy or use their APK signing keys on a machine that they also
use to read email and browse the web.

.hc

--
PGP fingerprint: 5E61 C878 0F86 295C E17D 8677 9F0F E587 374B BE81
https://pgp.mit.edu/pks/lookup?op=vindex&search=0x9F0FE587374BBE81

_______________________________________________
List info: https://lists.mayfirst.org/mailman/listinfo/guardian-dev
To unsubscribe, email: [email protected]
