> On Fri, Apr 19, 2013 at 10:37 PM, Andrew G. Babian
> <[email protected]>wrote:
>
>>
>> So to throw something somewhat more positive out there, I just looked
>> at the website of the people working at Google Research.  They've got
>> literally tons of people in areas like machine perception, AI, machine
>> learning, machine translation.  It does give me the feeling that there
>> are people, and with enough plugging, they will eventually get AGI as
>> just a natural progression.  Of course, I think the field and the stuff
>> they use has some missing bits, but that's just me.  You all can tilt at
>> all the windmills y'all want, but they have money and talent at a level
>> we can't approach.
>>

On Fri, April 19, 2013 9:35 pm, Ben Goertzel wrote:
> I have visited Google Research in Mountain View a number of times, and I
> know a bunch of researchers there fairly well...
>
> Of course their staff are intelligent and talented and so forth....  And
> they are well paid and have a lot of data and computing resources.
>
> I don't think their staff are supernaturally talented or anything like
> that....  Some of the folks I am working with on AI, in Hong Kong and
> Addis Ababa, are every bit as talented and clever as the Google Research
> staff....  Silicon Valley does not have a monopoly on brilliant tech
> talent, though they may well have the world's best publicists ;-) ..

me (agb):
I'm sorry, I didn't mean to say Ben didn't have some great people, I just
meant that they had a very large quantity of talent, and for big projects,
that can make a difference.  Because at this point, I'm not sure we are
really lacking crucial brilliant insights, though I think it's likely that
there are quite a few really hard details to be worked out.


Ben:
> In the end, only a very minuscule portion of the resources of Google --
> or any other current large tech company -- is oriented toward AGI in any
> direct or semi-direct way.  When AGI is pursued within these firms, it's
> currently in teeny-tiny skunkworks projects....  And these skunkworks
> projects tend to get quasi-randomly dissolved when corporate priorities
> change (e.g. Sam Adam's now-dormant Joshua Blue AGI project at IBM; some
> previous Google AGI skunkworks projects I know about via personal
> communications...)


me (agb):
It's probably not a small problem at all.  In fact, I would guess it's a
matter of searching the space of problems, and in a search, it's helpful
to have a lot more eyes on the problem.


Ben:
> So, consider the two possibilities:
>
> A)
> A large company with a teeny skunkworks AGI team, plus a lot of smart guys
> working on other projects peripherally related to AGI
>
> B)
> A small team working outside any large company or institution, with
> uncertain but non-zero funding, but focused directly on AGI
>
> ... Is it really so obvious that A is going to get to the end goal before
> B?  I don't think so....  Based on general common sense, it seems either
> one is possible....
>
> There is, of course, a scientific question here: Whether AGI can be
> achieved by basically integrating a bunch of components created for
> non-AGI purposes, with some sort of relatively simple "AGI controller"
> layered on top of it....  I personally don't think this can work.

me (agb):
Well, if it's just something simple, obviously it can't work.  It's a
little unfair to argue against that.  Such a controller would be
AI-complete in itself.  But the power of some of these dedicated systems
is so strong that they may well solve some of the problems the brain
handles with massive hardware.  Matt talks about how much computing power
the brain uses.  That is a real thing: low power, supermassive
parallelism, with extremely fine and dynamic learning.  It will simply
take a lot for an adding machine to come anywhere close to that level,
even throwing in all the tricks we can manage.



Ben:
> I think that even
> if the **ideas** underlying a bunch of narrow-AI components are sufficient
> to guide the creation of modules of an AGI system, in actual practice, the
> way narrow-AI systems are written generally precludes their integration
> into an AGI framework....   Integrating components into an AGI framework
> generally requires allowing each component to infuse knowledge and
> guidance into the others at a deep level, and generally narrow-AI
> software is not designed or coded to allow this; and redesigning a piece
> of narrow-AI software in such a way requires a lot of deep thinking as
> well as hard engineering....   I have been involved with this sort of
> work a lot...


me (agb):
I agree with this need to have the systems modified for better integration.
One of the problems with AI systems is their extremely narrow focus.  And
generally, they aren't set up to be interactive or to accept fine control
from other systems.  It might turn out that such narrow programs could be
modified to be accessible to a more general system.  Watson, for one,
integrates the results of different modules, and has a separate learning
module for evaluating how much confidence it has in each of them.
But short of modifying them, it might be possible to keep them as separate
black boxes, alongside some general mechanism.  There is a notion of
having a democratic system of organization for an intelligent system,
like the pandemonium model, and that really seems like the only way
a generally intelligent system can work.  Try everything; learn many
different methods.  There is ample evidence that even human thinkers
all think in different ways, and just have to manage organizing
all their competing pieces.  It doesn't always work so well, but we
learn.
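The pandemonium idea above can be sketched as a toy program.  Everything
here (the class names, the multiplicative trust update) is my own
illustration, not any particular system's design:

```python
# A toy sketch (all names hypothetical) of the pandemonium-style idea:
# several independent "demon" modules each propose an answer with a
# confidence, a selector picks the weighted-loudest shout, and a separate
# learning step adjusts how much each module is trusted -- roughly the
# role Watson's confidence-evaluation module plays.

class Demon:
    def __init__(self, name, answer_fn):
        self.name = name
        self.answer_fn = answer_fn  # a narrow black-box system
        self.weight = 1.0           # learned trust in this module

    def shout(self, question):
        answer, confidence = self.answer_fn(question)
        return answer, confidence * self.weight


class Pandemonium:
    def __init__(self, demons):
        self.demons = demons

    def decide(self, question):
        # Try everything: every module answers, and the loudest
        # weighted shout wins.
        shouts = [(d,) + d.shout(question) for d in self.demons]
        winner, answer, _ = max(shouts, key=lambda s: s[2])
        return winner, answer

    def feedback(self, demon, correct, rate=0.1):
        # Simple multiplicative update of trust based on outcome.
        demon.weight *= (1 + rate) if correct else (1 - rate)
```

Over many rounds, reliable modules come to dominate the competition,
while the black boxes themselves stay unmodified.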

But as for the issue of small dedicated vs. skunkworks, I do think having
just more people looking at the problem is an advantage.  I don't think
it is so much a matter of brilliant insight as solid work on the problem,
with more people looking.  And as I have said, I think people are getting
stuck with a bias toward something that is more controllable, reliable,
and uses their insights into the problems instead of having the computer
figure everything out for itself.  The overprotective mother problem, you
might call it.

Ben:
> Finally, and hopefully without being insulting to anyone, I would like to
> point out that the folks who post on this list are not remotely
> representative of the community of "AGI researchers unaffiliated with
> large corporations." ....  The folks who choose to spend a lot of time
> reading and writing on AGI e-mail lists form a quite particular
> sub-population.  On average, they tend to have fewer professional
> qualifications and less funding for their work, than plenty of other AGI
> researchers out there..

me (agb):
Granted.  This list isn't intended for the big guys.  I know some of them
too.  I was getting a little bit grumpy at the unnecessarily shrill tone. 
I think it has settled down again.  And I wanted to throw in something
positive about the possibility that some of the really big companies
may get there, even if the less populous teams can't manage.  I certainly
think there is plenty of room for everyone, and didn't mean to sound
discouraging.


Ben:
> For instance, I think Kris Thorisson at Reykjavik University is making a
> real stab at AGI, as are the guys at Deep Mind in the UK (Demis Hassabis,
> Shane Legg etc., with funding from Founders Fund)....  Dileep George is
> making his own effort, and will be keynoting at AGI-13 in Beijing....   So
> is Itamar Arel at U. Tennessee Knoxville (currently working on adding
> action & reinforcement to his deep learning perception system).   There
> are plenty of others.   These guys (like me) are not working for Google
> or M$ or IBM for a reason....  We have probably all been recruited by
> these firms repeatedly (I know I have), but prefer to pursue our own
> visions rather than being directed by corporate bosses, even though this
> means we will have a lot less funding and a lot more hassles....   Note
> that none of these other guys are on this email list...
>
> I myself find I have little time to pay attention to this list lately,
> because I'm spending half my time working on AGI, and half my time
> working on income-generating (and hopefully eventually wealth-generating)
> narrow-AI stuff (principally the application of machine learning and NLP
> to financial prediction).

me (agb):
I'm glad you make some time for this list.  I know how busy life can get.


Ben:
> I think this list serves a useful purpose, in that someone who is utterly
> new to the AGI field can sign up and quickly find others with a common
> interest....  But please don't assume that it reflects the state of the
> art in non-big-corporate AGI projects !!
> Ben Goertzel
> (list founder, and former list administrator...)

I've been on this list since its beginning, so I completely understand.
I was even on the comp.ai.philosophy newsgroup back in the day, and that
was even wilder.  This one does still try to have reasonable discussion,
but of course, everyone has their own vision, and there is very little
way to convince other people.  Even with working code (such as it is),
people are generally unpersuaded.  C'est la vie.

andi, just some guy, ya' know?




-------------------------------------------
AGI
Archives: https://www.listbox.com/member/archive/303/=now
RSS Feed: https://www.listbox.com/member/archive/rss/303/21088071-f452e424