This sums up Google well: http://twilightoftheidols.org/google-is-garbage/

On Wed, May 1, 2013 at 6:40 PM, Samantha Atkins <[email protected]> wrote:

>
>
>
> On Sat, Apr 20, 2013 at 1:35 PM, Mike Archbold <[email protected]>wrote:
>
>> I think Ray Kurzweil is up to something AGI-ish at Google, or so I
>> gather from skimming the news.  Maybe this was mentioned in the other
>> thread which I haven't read yet.  It sounded like one of a number of
>> "10 years and it will work" projects that have surfaced in the news
>> lately.  Mike A
>>
>
>
> I am not a Google insider but from what I have seen from outside and
> reading between the lines they are working hard on:
> 1) a Siri killer: full speech understanding by computational systems, or
> as close as they can get
> 2) much better NLP
> 3) using their Big Data skills and huge corpus to statistically brute
> force a lot of information extraction and narrow-AI-type results
> 4) using (2) and (3) to build the next gen super Watson type
> functionality and incorporating that into current and future projects.
>
> Offhand I don't see that any of that gives actual AGI although it will
> arguably make all of us effectively smarter if done well.
>
> I agree very much with Ben.  Large company skunkworks on more or less blue
> sky projects are notoriously fickle and prone to having their bubble burst
> on the rocks of corporate politics.  Alternately, if you can get a gig
> where they basically just pay you and a team without much hassle for a few
> years, then the research is kept so secret, from the world and even from
> 99% of the company, that the secrecy itself generates its own difficulties.
>
>
> - s
>
>>
>> On 4/20/13, Logan Streondj <[email protected]> wrote:
>> > It's true, Ben is really like an angel investor,
>> > since he has a high-profile AGI project
>> > and invests some time in interacting
>> > with our low-profile AGI projects,
>> > or even AGI ideas for some.
>> >
>> >
>> > On Fri, Apr 19, 2013 at 9:35 PM, Ben Goertzel <[email protected]> wrote:
>> >
>> >>
>> >> Andrew...
>> >>
>> >> [changed the thread name]
>> >>
>> >> On Fri, Apr 19, 2013 at 10:37 PM, Andrew G. Babian
>> >> <[email protected]>wrote:
>> >>
>> >>>
>> >>> So to throw something somewhat more positive out there, I just looked
>> >>> at the website of the people working at Google Research.  They've got
>> >>> literally tons of people in areas like machine perception, AI, machine
>> >>> learning, machine translation.  It does give me the feeling that there
>> >>> are people, and with enough plugging, they will eventually get AGI as
>> >>> just a natural progression.  Of course, I think the field and the
>> >>> stuff they use have some missing bits, but that's just me.  You all
>> >>> can tilt at all the windmills y'all want, but they have money and
>> >>> talent at a level we can't approach.
>> >>>
>> >>
>> >> I have visited Google Research in Mountain View a number of times,
>> >> and I know a bunch of researchers there fairly well...
>> >>
>> >> Of course their staff are intelligent and talented and so forth....
>> >> And they are well paid and have a lot of data and computing resources.
>> >>
>> >> I don't think their staff are supernaturally talented or anything like
>> >> that....  Some of the folks I am working with on AI, in Hong Kong and
>> >> Addis
>> >> Ababa, are every bit as talented and clever as the Google Research
>> >> staff....  Silicon Valley does not have a monopoly on brilliant tech
>> >> talent, though they may well have the world's best publicists ;-) ...
>> >>
>> >> In the end, only a very minuscule portion of the resources of Google --
>> >> or any other current large tech company -- is oriented toward AGI in
>> >> any direct or semi-direct way.  When AGI is pursued within these
>> >> firms, it's currently in teeny-tiny skunkworks projects....  And these
>> >> skunkworks projects tend to get quasi-randomly dissolved when corporate
>> >> priorities change (e.g. Sam Adams's now-dormant Joshua Blue AGI project
>> >> at IBM; some previous Google AGI skunkworks projects I know about via
>> >> personal communications...)
>> >>
>> >> So, consider the two possibilities:
>> >>
>> >> A)
>> >> A large company with a teeny skunkworks AGI team, plus a lot of smart
>> >> guys
>> >> working on other projects peripherally related to AGI
>> >>
>> >> B)
>> >> A small team working outside any large company or institution, with
>> >> uncertain but non-zero funding, but focused directly on AGI
>> >>
>> >> ... Is it really so obvious that A is going to get to the end goal
>> >> before B?  I don't think so....  Based on general common sense, it
>> >> seems either one is possible....
>> >>
>> >> There is, of course, a scientific question here: whether AGI can be
>> >> achieved by basically integrating a bunch of components created for
>> >> non-AGI purposes, with some sort of relatively simple "AGI controller"
>> >> layered on top of it....  I personally don't think this can work.   I
>> >> think that even if the **ideas** underlying a bunch of narrow-AI
>> >> components are sufficient to guide the creation of modules of an AGI
>> >> system, in actual practice, the way narrow-AI systems are written
>> >> generally precludes their integration into an AGI framework....
>> >> Integrating components into an AGI framework generally requires
>> >> allowing each component to infuse knowledge and guidance into the
>> >> others at a deep level, and generally narrow-AI software is not
>> >> designed or coded to allow this; and redesigning a piece of narrow-AI
>> >> software in such a way requires a lot of deep thinking as well as hard
>> >> engineering....   I have been involved with this sort of work a lot...
>> >>
>> >> Finally, and hopefully without being insulting to anyone, I would like
>> >> to point out that the folks who post on this list are not remotely
>> >> representative of the community of "AGI researchers unaffiliated with
>> >> large corporations."....  The folks who choose to spend a lot of time
>> >> reading and writing on AGI e-mail lists form a quite particular
>> >> sub-population.  On average, they tend to have fewer professional
>> >> qualifications and less funding for their work than plenty of other
>> >> AGI researchers out there...
>> >>
>> >> For instance, I think Kris Thorisson at Reykjavik University is making
>> >> a real stab at AGI, as are the guys at Deep Mind in the UK (Demis
>> >> Hassabis, Shane Legg etc., with funding from Founders Fund)....  Dileep
>> >> George is making his own effort, and will be keynoting at AGI-13 in
>> >> Beijing....  So is Itamar Arel at U. Tennessee Knoxville (currently
>> >> working on adding action & reinforcement to his deep learning
>> >> perception system).   There are plenty of others.   These guys (like
>> >> me) are not working for Google or M$ or IBM for a reason....  We have
>> >> probably all been recruited by these firms repeatedly (I know I have),
>> >> but prefer to pursue our own visions rather than being directed by
>> >> corporate bosses, even though this means we will have a lot less
>> >> funding and a lot more hassles....   Note that none of these other
>> >> guys are on this email list...
>> >>
>> >> I myself find I have little time to pay attention to this list lately,
>> >> because I'm spending half my time working on AGI, and half my time
>> >> working
>> >> on income-generating (and hopefully eventually wealth-generating)
>> >> narrow-AI
>> >> stuff (principally the application of machine learning and NLP to
>> >> financial
>> >> prediction).
>> >>
>> >> I think this list serves a useful purpose, in that someone who is
>> >> utterly new to the AGI field can sign up and quickly find others with
>> >> a common interest....  But please don't assume that it reflects the
>> >> state of the art in non-big-corporate AGI projects!!
>> >>
>> >> --
>> >> Ben Goertzel
>> >> (list founder, and former list administrator...)
>> >>
>> >>
>> >>
>> >>
>> >> -------------------------------------------
>> >> AGI
>> >> Archives: https://www.listbox.com/member/archive/303/=now
>> >> RSS Feed: https://www.listbox.com/member/archive/rss/303/5037279-a88c7a6d
>> >> Modify Your Subscription: https://www.listbox.com/member/?&;
>> >> Powered by Listbox: http://www.listbox.com
>> >
>> >
>> >
>> >
>>
>>
>>
>



