It seems that so-called intelligence consists of many different abilities. A dangerous ASI would need to have many of them at a level exceeding a normal human's. People would notice an AI that reached even human level in just one of them, for example if it passed the Turing test.
A dangerous ASI must be able to modify its own source code, which requires programming skill far beyond that of an average programmer. Before that can happen, the level of an average programmer must be reached first. We would notice it, or at least most programmers would notice, as they would lose their jobs. A dangerous ASI must also be able to predict people's behaviour, manipulate them and conceal its intentions. This is a completely different task from programming, but still a very useful one. A virtual marketing person, salesman, lobbyist or sociologist would not require as high a level of this skill as an ASI would. Moreover, there is a race to obtain ever better AI. So even if one AI is exceptionally better, there would be others, slightly worse, controlled by other players, who could stop it.

On Tue, Feb 7, 2017 at 5:08 PM, justcamel <[email protected]> wrote:

> What about some paranoia about the really existing threats? What about
> some paranoia about the process which has already turned half the planet
> into people rooting for Trump, Duterte, Erdogan and Co.? As I wrote 5 years
> ago ... society might collapse under the pressure of our monetary system,
> fictional debt, artificial scarcity and billions of #bullshitjobs long
> before AGI becomes a reality. You have some serious shit here to be
> paranoid about ... a process reprogramming our collective psyche and
> culture ... a process directly leading us to yet another world war and
> ultimately societal collapse ...
>
>
> On 05.02.2017 06:32, TimTyler wrote:
>
>> In his recent "Provably Beneficial AI" 2017 Asilomar conference talk:
>>
>> - https://www.youtube.com/watch?v=pARXQnX6QS8
>>
>> ...Stuart Russell argues that more paranoia would have helped
>> the nuclear industry - by making it more risk-conscious and
>> helping it to avoid Chernobyl. By analogy, more paranoia
>> would help us avoid disasters involving machine intelligence.
>>
>> IMO, the cases where more paranoia is helpful do exist.
>> Crossing roads, getting into motor vehicles and eating
>> sugar are all cases where more caution seems as though
>> it would be prudent.
>>
>> These are the exceptions, though. We are in a relatively safe
>> environment, while our brains and emotions evolved in an
>> environment where predators lurked around every water hole.
>> As a result, most humans are dysfunctionally paranoid. This
>> has been well documented by Dan Gardner in the book
>> "Risk: Why We Fear the Things We Shouldn't - and Put
>> Ourselves in Greater Danger".
>>
>> Irrational fear of vaccines has killed a large number
>> of people. Irrational fear of GMOs causes large-scale
>> problems in the distribution of food. Irrational fear of
>> carbon dioxide has caused a $1.5 trillion per year
>> expenditure on getting rid of the stuff.
>>
>> Stuart Russell's own example counts against his thesis:
>> irrational fear of nuclear power is the problem that prevents
>> its deployment - causing many more deaths in coal mines as
>> a direct consequence. In fact, nuclear power is - and always
>> has been - a very safe energy-producing technology.
>>
>> More caution does not typically lead to better outcomes. More
>> caution systematically and repeatedly leads to worse outcomes.
>> Humans are typically too paranoid for their own good. This is
>> the basic problem with fear-mongering and promoting risks:
>> the net effect on society is negative.
>>
>> I don't have to go on about this too much because
>> Max More has done my work for me:
>>
>> If you haven't done so before, go and read:
>>
>> http://www.maxmore.com/perils.htm
>>
>> Machine intelligence is having its own bout with the precautionary
>> principle at the moment - in the case of self-driving cars, trucks,
>> boats, trains and planes. We look set to cause a large number
>> of pointless deaths by throttling these technologies using the
>> precautionary principle. Let's calculate the number who lose their
>> lives during this process - so the costs of failing to deploy
>> intelligent machines are made very clear.
