On Sat, Oct 24, 2015 at 21:05:20 +0200, Alexander Berntsen wrote:
> For what it's worth I find your argument unconvincing anyway, as you
> have the chicken and egg problem with compilers, and almost everyone
> is using non-free non-documented hardware. And even free hardware with
> free documentation could trick you. Unless you make literally
> everything yourself, and interface with nothing, you may be tricked by
> your computer.
Those are important issues, but they require their own discussions, and they are not important for this one. SaaSS is something that is practical to avoid, here and now. SaaSS does not need to trick the user to be bad---we know it's bad. I'm not being tricked into thinking otherwise.

> We are mitigating this e.g. with free software. I believe we can
> mitigate it further with cryptography

You are speaking very generally. There are many issues with SaaSS, and cryptography is part of the means to mitigate them, but it alone does not present a solution to all of the problems of SaaSS.

Consider an ambitious and seemingly ideal scenario: a distributed, anonymous, fully homomorphic cryptosystem[0] running only free software (so that the software running on the servers can be studied), in which a user can use response variations to attempt to detect malicious systems or inconsistencies in operation (to the extent that is meaningful). Let's assume that the system is large enough that there are not enough malicious nodes in it to compromise its integrity (by providing consistently bad responses); similar considerations apply to Tor. The system can't (let's assume) discover your identity. It can't spy on your data. It can't manipulate your data in ways that you wouldn't expect if you were running your own instance of that software.

But you still can't modify the running instances. In fact, if you did, then that one instance would return different results than all the others, would be flagged as producing inconsistent results, and would be removed from the network; this, in fact, makes the type of system I'm describing difficult to implement usefully in the first place. So you have still given up your control. You can only do the type of computing that others say you can do, and if you try to say, "I have an idea; let's do it this way!", then unless everyone else agrees with your changes, you are told that you can't compute like that---it's not allowed.
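(As an aside, for anyone unfamiliar with the homomorphic property referenced in [0]: the sketch below is a minimal illustration using textbook, unpadded RSA, which happens to be multiplicatively homomorphic. The tiny primes and the whole construction are for illustration only---real fully homomorphic schemes are lattice-based and support arbitrary computation, not just multiplication.)

```python
# Toy sketch: textbook (unpadded) RSA is multiplicatively homomorphic,
# so a server can multiply two values it can only see in encrypted form.
# NOT a real FHE system; tiny primes chosen purely for illustration.

p, q = 61, 53            # toy primes (never this small in practice)
n = p * q                # public modulus, 3233
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent
d = pow(e, -1, phi)      # private exponent (2753)

def enc(m):
    return pow(m, e, n)

def dec(c):
    return pow(c, d, n)

a, b = 7, 3
c = (enc(a) * enc(b)) % n   # the "server" multiplies ciphertexts only
assert dec(c) == a * b      # a*b was computed without revealing a or b
print(dec(c))               # 21
```

The point, for this discussion, is that such a scheme protects the *data*, not the user's *control*: the server still decides what computation is performed.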
If you get rid of that distributed nature, then we're back to where we are today: pretty much the same place, but less dystopian.

> and sophisticated type systems. There is programming languages theory
> research going into this right now.

We're talking about two different things. You're talking about tricking hackers/programmers/users who read the code. Even if all software were written in a system like Coq, and all of it were formally proven to operate exactly as it was designed, the above issues would still stand. Fundamentally, you'll always be able to trick others with code (though hopefully someone will notice at some point): you don't need a weak type system or a hard-to-grok programming language to do that. Someone might just not understand what you're doing, plain and simple, even though it's perfectly clear to the author.

[0]: https://en.wikipedia.org/wiki/Homomorphic_encryption

-- 
Mike Gerwitz
Free Software Hacker | GNU Maintainer
http://mikegerwitz.com
FSF Member #5804 | GPG Key ID: 0x8EE30EAB
