On Wed, Oct 30, 2013 at 5:08 PM, Chris de Morsella
<cdemorse...@yahoo.com> wrote:
>
>
> -----Original Message-----
> From: everything-list@googlegroups.com
> [mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
> Sent: Wednesday, October 30, 2013 8:50 AM
> To: everything-list@googlegroups.com
> Subject: Re: Douglas Hofstadter Article
>
> On Wed, Oct 30, 2013 at 5:34 AM, Chris de Morsella <cdemorse...@yahoo.com>
> wrote:
>>
>>
>> -----Original Message-----
>> From: everything-list@googlegroups.com
>> [mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
>> Sent: Monday, October 28, 2013 2:32 AM
>> To: everything-list@googlegroups.com
>> Subject: Re: Douglas Hofstadter Article
>>
>> On Sun, Oct 27, 2013 at 10:49 PM, Chris de Morsella
>> <cdemorse...@yahoo.com>
>> wrote:
>>>
>>>
>>> -----Original Message-----
>>> From: everything-list@googlegroups.com
>>> [mailto:everything-list@googlegroups.com] On Behalf Of Telmo Menezes
>>> Sent: Friday, October 25, 2013 2:38 PM
>>> To: everything-list@googlegroups.com
>>> Subject: Re: Douglas Hofstadter Article
>>>
>>> On Fri, Oct 25, 2013 at 10:30 PM, Chris de Morsella
>>> <cdemorse...@yahoo.com>
>>> wrote:
>>>>
>>>> -----Original Message-----
>>>> From: everything-list@googlegroups.com
>>>> [mailto:everything-list@googlegroups.com] On Behalf Of meekerdb
>>>> Sent: Friday, October 25, 2013 10:46 AM
>>>> To: everything-list@googlegroups.com
>>>> Subject: Re: Douglas Hofstadter Article
>>>>
>>>> On 10/25/2013 3:24 AM, Telmo Menezes wrote:
>>>>> My high-level objection is very simple: chess was an excuse to
>>>>> pursue AI. In an era of much lower computational power, people
>>>>> figured that for a computer to beat a GM at chess, some meaningful
>>>>> AI would have to be developed along the way. I don't think that Deep
>>>>> Blue is what they had in mind. IBM cheated in a way. I do think
>>>>> that Deep Blue is an accomplishment, but not _the_ accomplishment
>>>>> we hoped for.
>>>>
>>>> Tree search and alpha-beta pruning have very general application so I
>>>> have no doubt they are among the many techniques that human brains use.
>>>> Also having a very extensive 'book' memory is something humans use.
>>>> But the memorized games and position evaluation are both very specific
>>>> to chess and are hard to duplicate in general problem solving.  So I
>>>> think chess programs did contribute a little to AI. The Mars Rover
>>>> probably uses decision tree searches sometimes.
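To make the tree-search-with-pruning idea concrete, here is a minimal
alpha-beta sketch in Python. The move generator and position evaluator are
hypothetical placeholders, not any particular engine's API:

import math

def alphabeta(position, depth, alpha, beta, maximizing, get_moves, evaluate):
    # Minimax search that skips branches which cannot change the outcome.
    moves = get_moves(position)              # hypothetical move generator
    if depth == 0 or not moves:
        return evaluate(position)            # hypothetical static evaluator
    if maximizing:
        best = -math.inf
        for child in moves:
            best = max(best, alphabeta(child, depth - 1, alpha, beta,
                                       False, get_moves, evaluate))
            alpha = max(alpha, best)
            if alpha >= beta:                # opponent already has a better
                break                        # option elsewhere: prune
        return best
    else:
        best = math.inf
        for child in moves:
            best = min(best, alphabeta(child, depth - 1, alpha, beta,
                                       True, get_moves, evaluate))
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best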
>>>>
>>>> Agreed.
>>>> Some manner (e.g. an algorithm) of pruning the uninteresting branches
>>>> -- as they are discovered -- from dynamic sets of interest is
>>>> fundamental to achieving scalability. Without being able to throw
>>>> stuff out as new input comes in -- via the senses, and via
>>>> meta-interactions with the internal state of mind, such as memories --
>>>> a being will rather quickly gum up in information overload and memory
>>>> exhaustion. Without pruning, growth spirals geometrically out of
>>>> control.
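A toy illustration of that kind of pruning -- a fixed-capacity working set
that evicts the least interesting item as new input streams in. The salience
scores and item names here are made up:

import heapq
import itertools
import random

class PrunedWorkingSet:
    # Keeps only the `capacity` most salient items seen so far.
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []                    # min-heap ordered by salience
        self._tie = itertools.count()      # breaks ties between equal scores

    def observe(self, item, salience):
        heapq.heappush(self._heap, (salience, next(self._tie), item))
        if len(self._heap) > self.capacity:
            heapq.heappop(self._heap)      # prune the least salient item

    def contents(self):
        return [item for _, _, item in sorted(self._heap, reverse=True)]

# Usage sketch: 10,000 noisy percepts arrive, only the top 5 are retained.
ws = PrunedWorkingSet(capacity=5)
for i in range(10_000):
    ws.observe(f"percept-{i}", salience=random.random())
print(ws.contents())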
>>>> There is pretty good evidence -- from what I have read of current
>>>> neuroscience -- that the brain is indeed throwing away a large portion
>>>> of raw sensory data during the process of reifying these streams into
>>>> the smooth internal construct, or model, of reality that we actually
>>>> experience. In other words our model -- what we "see", "hear", "taste",
>>>> "smell", "feel" and "orient" [a distinct inner-ear organ], and perhaps
>>>> other senses, such as the sense of the directional flow of time -- this
>>>> construct, which is what we perceive as real, contains (and is
>>>> constructed from) only a fraction of the original stream of raw sensory
>>>> data. In fact, in some cases the brain can be tricked into "editing"
>>>> real, sense-supplied visual input literally out of the picture -- as
>>>> has been demonstrated experimentally.
>>>> We do not experience the real world; we experience the model of it our
>>>> brains have supplied us with. That model, while in most cases pretty
>>>> well reflective of the actual sensory streams, crucially depends on the
>>>> mind's internal state and its pre-conscious operations -- on all the
>>>> pruning and editing going on in the buffer zone between when the brain
>>>> begins working on the incoming perception stream and when we -- the
>>>> observer -- self-perceive our current stream of being.
>>>> It also seems clear that the brain prunes by drilling down and
>>>> focusing on very specific, micro-structure-oriented tasks such as
>>>> visual edge detection (a critical part of interpreting visual data).
>>>> If some dynamic neural micro-structure decides it has recognized a
>>>> visual edge, in this example, it probably fires a synchronized signal
>>>> as expeditiously as it can up the chain of dynamically forming and
>>>> interacting neural decision nets, then grabs the next bucket in an
>>>> endless stream needing immediate attention.
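For what it's worth, the edge-detection step can be sketched with a toy
convolution -- purely illustrative, not a claim about how cortical circuits
actually do it; the kernel, threshold and tiny image below are made up:

# Toy vertical-edge detector: convolve a grayscale image with a Sobel-style
# kernel and report positions where the response magnitude is large.
SOBEL_X = [[-1, 0, 1],
           [-2, 0, 2],
           [-1, 0, 1]]

def detect_vertical_edges(image, threshold=2):
    edges = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            response = sum(SOBEL_X[i][j] * image[r - 1 + i][c - 1 + j]
                           for i in range(3) for j in range(3))
            if abs(response) >= threshold:
                edges.append((r, c))
    return edges

# A made-up 4x6 image: dark on the left half, bright on the right half.
image = [[0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1],
         [0, 0, 0, 1, 1, 1]]
print(detect_vertical_edges(image))   # edge reported around columns 2-3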
>>>> I would argue that nervous systems that were not adept at throwing
>>>> stuff out as soon as its information value decayed long ago became
>>>> part of the food supply of ancestral life forms whose nervous systems
>>>> were better at discarding whatever was no longer needed. There is a
>>>> clear evolutionary pressure to optimize environmental response through
>>>> efficient (yet high-fidelity) pruning algorithms, in order to maximize
>>>> neural efficiency and speed up sense perception (the reification we
>>>> perceive unfolding before us). This is also a factor in speed of
>>>> operation, and in survival a fast brain is almost always better than a
>>>> slow brain; slow brains lead to short lives.
>>>> But it is not just pruning: selective and very rapid signal
>>>> amplification is the flip side of pruning, and this is very much going
>>>> on as well. For example, the sudden shadow flickering at the edge of
>>>> the visual field that, for some reason, leaps front and center into
>>>> conscious focus as adrenalin pumps -- all this from just a small
>>>> peripheral flicker that the brain decided, at some local
>>>> sentinel-algorithm level, was somehow out of place... maybe because
>>>> there was also a sound coming from the same direction. Clearly the
>>>> brain is able to suddenly amplify a signal -- critically, at any step
>>>> along the way to the final synthesis of the disparate sense signals
>>>> into a cohesive picture -- and jam it right up to the executive level,
>>>> promoting it up the brain's attention chain much more rapidly and
>>>> prominently than is normally the case.
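The amplification half could be sketched the same way -- again purely
illustrative, with a made-up rule: a weak signal whose direction is
corroborated by a second sense channel gets its salience multiplied and
jumps the attention queue:

import heapq

def prioritize(percepts, boost=10.0):
    # Order percepts by salience, amplifying any whose direction is
    # corroborated by another sense channel (hypothetical rule).
    channels_by_direction = {}
    for p in percepts:
        channels_by_direction.setdefault(p["direction"], set()).add(p["channel"])

    queue = []
    for i, p in enumerate(percepts):
        salience = p["salience"]
        if len(channels_by_direction[p["direction"]]) > 1:  # cross-modal match
            salience *= boost                               # amplify the signal
        heapq.heappush(queue, (-salience, i, p))            # max-heap via negation
    return [heapq.heappop(queue)[2] for _ in range(len(queue))]

# Usage: a faint peripheral flicker plus a rustle from the same bearing
# outranks a brighter but uncorroborated stimulus straight ahead.
percepts = [
    {"channel": "vision",  "direction": "left",  "salience": 0.2},
    {"channel": "hearing", "direction": "left",  "salience": 0.3},
    {"channel": "vision",  "direction": "ahead", "salience": 0.9},
]
for p in prioritize(percepts):
    print(p)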
>>>> The survival benefit of this kind of alarm circuitry, as well as of
>>>> pruning, is clear. Our brain's circuitry is the outcome of billions of
>>>> years of selective pressure, and I am fairly certain that both signal
>>>> suppression (pruning) and signal amplification are operating at all
>>>> scales.
>>>
>>>>> I'm not arguing against any of these things.
>>>
>>> Nor am I suggesting you were :)
>>> Just jumping into things.
>>
>>>> Cool. And I like what you say, btw.
>>
>> Thanks, and likewise, btw :)
>>
>>>>>
>>>>> I believe there will be an AI renaissance and I hope to be alive to
>>>>> witness it.
>>>>
>>>> You may be disappointed, or even dismayed.  I don't think there's
>>>> much reason to expect or even want to create human-like AI.  That's
>>>> like the old idea of achieving flight by attaching wings to people
>>>> and making them fly like birds.  Airplanes don't fly like birds.  It may
>>>> turn out that "real" AI, intelligence that far exceeds human
>>>> capabilities, will be more like Deep Blue than Kasparov.
>>>>
>>>> Brent
>>>>
>>>> Brent -- I tend to agree with you here as well, much as it would be
>>>> flattering to us if super-AI were like us, but, well, just better...
>>>
>>>>> This part tends to trigger ideological reactions. The doubt is
>>>>> whether it's going to be leftist or religious :)
>>>
>>> Super AI could be so alien to our limited capacity of perception that
>>> our threads of existence would barely intersect, if they ever did.
>>> Even our perception of reality -- occurring within a four-dimensional
>>> matrix with a one-way flow of time -- may be so reduced and flattened
>>> as to be incomparable with reality as a super AI experiences it. Who's
>>> to say we would even inhabit the same reality?
>>
>> Agreed.
>>
>>>
>>>> there is
>>>> no guarantee that the actual outcome of a self-emergent process that
>>>> generates a self-perpetuating AI will have much resemblance to us on
>>>> an emotional/empathetic level.
>>>
>>>>> There are a lot of false dichotomies going on here. I would bet
>>>>> there are
>>> many different types of intelligence, some more human-like, some
>>> less, that can be engineered by a number of different processes, some
>>> more self-emergent, some more controlled.
>>>
>>> Agreed. AI will arise first as domain-specific AI -- for example the
>>> self-driving car or the autonomous hunter-killer drone... or, at the
>>> other end of the spectrum from death machines, the robotic nanny that
>>> can call for help and care for infants or Alzheimer's patients. These
>>> would bear little resemblance to each other -- at least on a
>>> functional level.
>>
>>> My contention, though, is that impressive as these things may be, I'm
>>> not convinced they are on the track that will produce what I consider
>>> to be true AI -- or at least no more on that track than any other
>>> computer-science achievement.
>>
>> I take your point. A generalized self-aware intelligence is a much
>> more formidable goal. But perhaps these increasingly smart,
>> self-learning expert systems will develop the techniques and toolsets
>> from which, some day, an AI will emerge.
>>
>>> Perhaps, however, there may be some underlying algorithmic
>>> similarities -- some design patterns for consciousness. I would not
>>> rule this out either.
>>>
>>>> The prime driver for the evolution of AI currently is, and has long
>>>> been, military applications. This is where the big money is.
>>>> If it becomes a Darwinian process and the evolutionary pressure is
>>>> to develop effective & increasingly autonomous killing machines, then
>>>> the kind of AI that I am guessing eventually emerges from these
>>>> selective pressures could behave in an exceedingly unpleasant & deadly
>>>> manner towards humans, and in fact may not like us at all.
>>>
>>>>> I have some hope that violence diminishes at higher levels of
>>> intellectual development.
>>>
>>> I share your hope, but my heart is saddened by how we, as a species,
>>> do not seem to be fulfilling it.
>>
>> Perhaps we can be more optimistic here. We had two global wars in the
>> first half of the 20th century, but the third world war never came.
>> Notice that in the midst of the Second World War there was a real
>> concern that we could be in this state of total war forever -- that
>> modern weaponry had removed the possibility of peace. It was partly this
>> fear that led to the horrendous decision to drop atomic bombs on
>> Hiroshima and Nagasaki. I'm not trying to be an apologist here; I abhor
>> violence. I'm just trying to recover some context.
>>
>> Then we survived the Cold War and its extinction-level threat of
>> thermonuclear war. In the great scheme of things, we seem to be able
>> to increase the wisdom of our species when confronted with new levels
>> of destructive power.
>>
>> Telmo.
>>
>> All true, but we have also kicked off the greatest planetary
>> extinction event in many millions of years.
>
>>> Not sure what you are referring to?
>
> To this: It's frightening but true: Our planet is now in the midst of its
> sixth mass extinction of plants and animals - the sixth wave of extinctions
> in the past half-billion years. We're currently experiencing the worst spate
> of species die-offs since the loss of the dinosaurs 65 million years ago.
> Although extinction is a natural phenomenon, it occurs at a natural
> "background" rate of about one to five species per year. Scientists estimate
> we're now losing species at 1,000 to 10,000 times the background rate, with
> literally dozens going extinct every day [1]. It could be a scary future
> indeed, with as many as 30 to 50 percent of all species possibly heading
> toward extinction by mid-century [2].
> http://www.biologicaldiversity.org/programs/biodiversity/elements_of_biodiversity/extinction_crisis/
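Taking the quoted figures at face value, the "dozens every day" claim is
just the arithmetic of the two ranges above -- a quick back-of-the-envelope
check, nothing more:

# No new data; only the numbers cited above: a background rate of 1-5
# species/year and a 1,000-10,000x multiplier.
background_per_year = (1, 5)
multiplier = (1_000, 10_000)

low = background_per_year[0] * multiplier[0]     # 1,000 species per year
high = background_per_year[1] * multiplier[1]    # 50,000 species per year

print(f"estimated losses: {low:,} to {high:,} species per year")
print(f"roughly {low / 365:.0f} to {high / 365:.0f} per day")
# ~3 to ~137 per day, i.e. "dozens going extinct every day" at the midrange.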

Thanks, I wasn't aware of this.

>> And this planet is running out of easy oil, and the struggles over the
>> last remaining mega-fields could get intense...
>> the preamble wars have already been fought in the Persian Gulf. Oil
>> is the heroin of industrial society, and junkies in need of a fix do
>> crazy shit. We have amassed such terrible weaponry and can behave so
>> irrationally and violently when cornered. I very much hope you are
>> right and that I am wrong, btw.
>
> Yeah, I agree. The dependence on fossil fuels is our greatest existential
> threat, in my opinion.
> The earth's oil reserves may be a "do or die" opportunity for human
> civilisation. We should be using this energy free-ride to bootstrap the
> next generation of energy-generation tech. I don't think we are.
> There are plenty of interesting ideas, but they require a lot of initial
> energy investment. Once we really need them, we might not have the energy
> budget to pull them off. Depressingly, this might be an explanation for
> the Fermi paradox -- the idea that overcoming fossil-fuel dependence is a
> great filter that civilisations are very unlikely to survive.
>
> Yeah, I fear we are staring at a pretty narrow bottleneck, and instead of
> taking the necessary steps and making the necessary infrastructural and
> technology investments -- many of which require lead times of 30 to 40
> years from planning to final deployment of full-scale systems -- we have
> been in a blind rush to burn this fossil treasure up as quickly as
> possible for no particularly good reason. Not a wise course of action,
> and one which is going to really sink us.
>
> The upside is: if one believes in the MWI or comp, it's all good anyway. :)
>
> Yeah... no worries here... I will take it as it unfolds.
> Cheers,
> -Chris
>
> Best,
> Telmo.