--- Quasar Strider <[EMAIL PROTECTED]> wrote:

> On 9/7/07, Matt Mahoney <[EMAIL PROTECTED]> wrote:
>
> > --- Quasar Strider <[EMAIL PROTECTED]> wrote:
> >
> > > Hello,
> > >
> > > I see several possible avenues for implementing a self-aware machine which can pass the Turing test (i.e. human-level AI): mechanical and electronic. However, I see little purpose in doing this. Fact is, we already have self-aware machines which can pass the Turing test: human beings.
> >
> > This was not Turing's goal, nor is it the direction that AI is headed.
>
> I keep seeing people attempting to build electronic brains, or brain stems, of increasing complexity: http://en.wikipedia.org/wiki/Blue_Brain
>
> If I extrapolate indefinitely using Moore's Law, I get a point where it crosses the human-level threshold. We are currently near a semiconductor fabrication limit, but there is no obvious scientific barrier to building such a thing. The living proof of concept is inside our heads.
>
> What is the direction that AI is headed in, then?
AI, as a mainstream research effort, is headed towards developing better video game characters and automated telephone systems.

> As for the man Alan Turing: I believe his goal was merely happiness and companionship. He pursued it using maths and computers merely because he was good at them. It was World War II. He tried to help his people survive it.
>
> > Turing's goal was to define artificial intelligence.
>
> I believe Alan Turing defined human-level AI.
>
> As for AI, I believe it has existed since the Jacquard loom, or perhaps even before that. I consider a mechanical clock to be a primitive fixed-function computer. It seems the Antikythera mechanism could even be programmed by hand.

None of those computers showed any sign of intelligence, or even rational decision making. At least a video game character can decide among many possible actions with the end goal of getting from point A to point B.

> > The question of whether consciousness can exist in a machine has been debated since the earliest computers. Either machines can be conscious or consciousness does not exist. The human brain is programmed through DNA to believe in the existence of its own consciousness and free will, and to fear death. It is simply a property of good learning algorithms to behave as if they had free will: a balance between exploitation for immediate reward and exploration for the possibility of gaining knowledge for greater future reward. Animals without these characteristics did not pass on their DNA. Therefore you have them.
>
> I agree. I believe we are biomechanical machines and a result of evolution. There is no use denying the fossil evidence.
>
> > Turing avoided the controversial question of consciousness by equating intelligence to the appearance of intelligence. It is not the best test of intelligence, but it seems to be the only one that people can agree on.
> >
> > The goal of commercial AI is not to create humans, but to solve the remaining problems that humans can still do better than computers, such as language and vision. You see Google making progress in these areas, but I don't think you would ever confuse Google with a human.
>
> I see these attempts as the electronic equivalent of a spotter dog or falcon. I have few qualms or reservations about such a technology.
>
> I beg you to remember: a dog has keen hearing but poor eyesight. Dogs also respect us as leaders of their pack. I doubt a self-aware piece of software with greater-than-human-grade AI, networked into all the world's computers, omnipresent and omniscient, would see any one of us as a pack leader.

No AI, unless it was very carefully designed, would see anything as a "pack leader"; indeed, it wouldn't know what you meant by "pack".

> Modern trends towards remotely automated aircraft, like the X-45 UCAV, carrying live weapons, do not ease my mind one bit.

Any human-level AGI could easily take over the world anyway, through the Internet.

> We could be automating ourselves into oblivion. It used to be that we did not trust a nuclear missile computer, and required two separate people, each with his own key, to launch a computerized weapon. How easily we forget.

That we forget what? That safety features are required? We still have safety features on all important computer systems.
All spacecraft, past, present, and future, have redundant computers to guard against failure.

> I think we are creating our own pet T-Rex and hoping he will be contented by following our lead. We are so security-fixated and afraid of death that we are putting all our trust into machines, when our enemies are just a bunch of insane and poor people in the desert. We need human police, not Terminator packs.

Please read http://www.acceleratingfuture.com/tom/?p=12, or at least stop using the "I-saw-it-in-the-movies-it-must-be-true" argument.

> I believe we are living under a security paranoia.
>
> Google is a symbiotic collective of individual people and machines. A person is a symbiotic collective of individual cells.

A rock is a symbiotic collection of individual chemical compounds. All hail the granite overlords!

> Chilled? I am against torture and the death penalty, and I enjoy the company of children and old people. I do not own a gun.
>
> I am e-mailing this from a Google account, even knowing that they can easily filter everything. Some people suspect they comb everything for recruiting, or more sinister, purposes.

That is what public-key encryption is for, if you're emailing anything sensitive (see the short sketch after my signature).

> Some of us, in the community, do not trust Google because it is a secretive entity which allowed China to censor the population inside its borders. A secretive entity in a country with the death penalty, which attempted to jail Dmitry Sklyarov and DVD Jon, and which inflicts torture on its own citizens in Guantanamo Bay under charges that skirt the Geneva Conventions and its own Constitution. Heck, Alan Cox does not even wish to enter the USA anymore.
>
> I personally believe it would be hard to attract so many smart people without convincing them that you are doing the right thing. The best of us have always worked on what we wished, not what someone else told us to do. This has been the case since the dawn of time.
>
> I also believe that it is better to have a Pax Americana than to enter the Dark Ages again.
>
> I judge people by their actions and then speak. Corporations are made of people. From my point of view, Google and the USA are beacons of light in the darkness. I may not agree with everything either of these entities does, but I understand their reasons. Sometimes concessions must be made in the name of progress.
>
> I hope Google and the USA will keep true to their motto and Constitution.
>
> > > We do not need direct neural links to our brain to download and upload childhood memories.

=== message truncated ===

- Tom
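P.S. For anyone who hasn't used it: here is a minimal, illustrative sketch of the public-key idea in Python with the "cryptography" package. The throwaway key pair, the sample message, and the package choice are my own assumptions for the example; for real mail you would use OpenPGP/GnuPG with your recipient's published key rather than generating a pair on the spot, but the principle is the same.

    # Illustrative only: a throwaway RSA key pair stands in for a real
    # recipient's published key.
    from cryptography.hazmat.primitives.asymmetric import rsa, padding
    from cryptography.hazmat.primitives import hashes

    # The recipient generates this pair once and publishes only the public half.
    private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    public_key = private_key.public_key()

    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    # Anyone holding the public key can encrypt...
    ciphertext = public_key.encrypt(b"anything you'd rather not have filtered", oaep)

    # ...but only the holder of the private key can decrypt, so the mail
    # provider sitting in the middle sees nothing but ciphertext.
    assert private_key.decrypt(ciphertext, oaep) == b"anything you'd rather not have filtered"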
