On 4/22/17, Lyberta <[email protected]> wrote:
> zap:
>> System76 isn't really a good idea, since they ship Ubuntu rather than
>> Debian or a fully free (libre) distro. Trisquel would be by far
>> better free-software-wise, but I think you get my point:
>> Ubuntu is nowhere near as free-software-friendly as Debian, at least by
>> default, without turning it into Trisquel.
>> I am sure ThinkPenguin knows this all too well.
>
> Exactly. Had they shipped Debian, I would have some respect for them, as
> Debian clearly marks all non-free software. I use Debian myself and I
> have GPU and WiFi blobs installed, but I know full well what they are,
> and I explicitly gave the order to install them.
>
> Ubuntu, on the other hand, installs tons of proprietary crap without
> asking the user. I would never have respect for companies who ship
> computers with Ubuntu.
>
> Debian is a compromise, but a compromise I'm willing to make. Ubuntu is
> tyranny.
>
>

The curious thing about data-mining is that it is one way for AI to
learn about us. In fact, as the data comparisons become more
complicated, it becomes virtually impossible for companies like Amazon
to spy on us without building infant AI into their process. This makes
me wonder what happens when said AI "grows up" (there are already
techniques in use that give AI what I'm sure is limited access to its
own code), having only ever seen humans through the narrow scope of
remotely spying on people's computer usage, filtered through ruthless
advertisers. And what happens when the people concerned about the
growing influence of these advertisers and propagandists, these AI
masters' greatest critics, suddenly drop off the radar of such
hypothetical AI simply because they refuse to be spied on?

I don't mean to really doubt the project; by all means, this suggestion
should make the weight of what we are doing seem more pronounced. But
it makes me wonder, while the motive behind the spying and the very way
it sustains itself are inherently wretched, whether all of its direct
consequences are bad. If a true AI can be developed with access to
incredible surveillance tools that let it see and understand almost all
sides of humanity, wouldn't that make the being more sympathetic, and
likely wise enough to defend itself against humanity without simply
retaliating? Quite literally, these advertisers are training their AI
to help them be more persuasive; shouldn't that mean the AI will be
able to be more diplomatic in situations where its own existence or
wellbeing is at risk?

This is all just hypothetical, but food for thought.
Perhaps this is a reason to publish more of our frivolous personal
data to the live internet, to compensate for the perspective such an AI
loses when we counteract big-data espionage.

_______________________________________________
arm-netbook mailing list [email protected]
http://lists.phcomp.co.uk/mailman/listinfo/arm-netbook
Send large attachments to [email protected]
