On Wed, Feb 20, 2019 at 11:34 PM Nanograte Knowledge Technologies <
[email protected]> wrote:

>
> Last, a critical thought. I have no idea why anyone would spend a useful
> life on building submarines, when those have already been perfected and are
> not what is needed in the world.
>

No one liked my starship analogy either. Are there acceptable analogies?

One can theorize, and one can make things. Theory without grounding is
poetry. Poetry is nice for inciting feelings (or inciting riots), but poems
are not blueprints.

One can build things, but without theory, the things that are built are ...
mundane. Practical, even. Like shaving kits and toothbrushes. Every home
should have one.

The stuff that changes the world is a fusion of these two. Ideas turned
into physical reality. It's difficult, it's slow. Plenty to bitch and moan
about. 1% inspiration, 99% perspiration. But that's the place I try to be:
converting ideas into things.

Here, I just updated the AtomSpace README. It's a low-level base
infrastructure:

https://github.com/opencog/atomspace/blob/master/README.md

I'm looking for hands-on people, because, after a decade of work, I think
it's finally cleaned up enough, coherent enough, opened up and breezy enough
that it can finally support the next level of AGI research. Until now, it's
been clotted up and bottlenecked, thrombosed with careless code, sloppy
ideas, convoluted implementation. I think that's been solved. What's left
is a structure that can accommodate many users, many ideas, many theories.
The internals can be readily expanded, fleshed out, articulated and brought
to life, in any of half-a-dozen different ways.

The result is not AGI. It's not a blueprint for AGI. It's not a proposal
for AGI. It's not a theory of AGI. It's not a manifesto for AGI. It's not
less wrong. It's an experimental hall, with a couple of large,
integrated, complex machines that do generic things in simple, easy ways.

What's the correct analogy? The 23rd-century starship didn't work; 20th-century
submarines didn't either. Let's go back in time. Let's pretend it's 1832. The
AtomSpace is like a hall with a kiln for making bricks, a blast furnace for
making steel, and a rope-walk for spinning ropes. If you can't make anything
useful out of bricks and steel and ropes, I'm sorry, but that's what we've
got. (Well, you have to make the bricks and steel and rope yourself. This
is not Home Depot.)

--linas

> To my mind, that is just playing it safe. What it is not, is assuming
> industry leadership and stepping out to define what AGI should become. To
> do so takes very-specific personality, and character. It requires a
> historical bigness, not petty nit picking and ridicule. This, I'm pointing
> to at our learned friend (and similar others who I have encountered here)
> who seem to think no one else in this whole, wide world has much use to
> contribute to their version(s) of AGI. I think they're sorely mistaken, but
> only time would tell.
>
> Rgds
>
> Robert Benjamin
>
>
>
> ------------------------------
> *From:* Rob Freeman <[email protected]>
> *Sent:* Thursday, 21 February 2019 12:38 AM
> *To:* AGI
> *Subject:* Re: [agi] openAI's AI advances and PR stunt...
>
> OK, that makes sense Ben. So long as you have a clear picture of how to
> progress the theory beyond temporary expediency, temporarily using the
> state-of-the-art may be strategic.
>
> So long as you are moving forward with some strong theoretical candidates
> too. If we get trapped without theory, we're blind. There are too few
> people with any broad theoretical vision for how to move forward. Too many
> script kiddies just tweaking blindly, viz, the "important step" this thread
> began with.
>
> I'm encouraged that it now appears you are deconstructing grammar and
> resolving it to a raw network level. That Linas is seeing the relevance of
> maths like category theory, which is motivated by formal incompleteness,
> speaks to this realization. (Though he may not be aware of the full import.)
>
> Deep learning does not realize this. It does not realize that formal
> description above the network level will be incomplete. I'm sure that is
> the key theoretical failure holding it back. I wish there were more people
> talking about it. If deep learning realized this they wouldn't still be
> trying to "learn" representations, whether in intermediate layers or other.
> (What was that article recently about the representation "bottleneck" idea
> in deep learning needing to be revised?)
>
> It's actually ironic that deep learning does not realize this idea that
> formal description (above the network) must always be incomplete, because
> it is also the key to the success of deep learning! The whole success of
> distributed representation is due to this. The field moved to distributed
> representation blindly, without theory, just because things started working
> better that way! But you still see articles where people say no-one knows
> why distributed representation works better! The failure of theoretical
> vision is extraordinary.
>
> But if you've deconstructed your dictionaries (throwing out your
> hand-coded dictionaries?) and arrived back at the level of observation in a
> sequence network, and done it because of the theoretical realization that
> complete representation above the network level is impossible (or was it
> just an accident, trying to deconstruct symbolism to connectionism, and
> then accidentally noticing the relevance to variational theories of maths?),
> then your group would be the only ones I've come across who have done so.
> (I think the Oxford thread of variational formalization, around Coecke et
> al. and Grefenstette, was also seduced away by the short-term effectiveness
> of deep learning on GPUs.)
>
> We need to keep (or get!) the theoretical vision.
>
> Even given a vision of formal incompleteness, you (and Pissanetzky?) may
> still be lacking a totally clear conception that the key problem is
> assembling elements in new ways all the time.
>
> Still, some focus on assembling elements in different ways (from a
> sequence network) is encouraging. There is scope to move forward.
>
> As a concrete, immediate, idea to explore moving forward, I hope you'll
> look at the idea of using oscillations to structure your sequence network
> representations. For it to be meaningful your networks will need to be
> connected in ways which directly reflect the ideas behind embedding vectors
> (without their linearities.) I don't know if that is true for your
> networks. But given that, implementation should be simple, if practically
> slow without parallel hardware.
>
> -Rob
>
> On Thu, Feb 21, 2019 at 12:03 AM Ben Goertzel <[email protected]> wrote:
>
> It's not that it's hard to feed data into OpenCog, whose
> representation capability is very flexible
>
> It's simply that deep NNs running on multi-GPU clusters can process
> massive amounts of text very very fast, and OpenCog's processing is
> much slower than that currently...
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> + delivery
> options <https://agi.topicbox.com/groups/agi/subscription> Permalink
> <https://agi.topicbox.com/groups/agi/T581199cf280badd7-M630f3e9428c378ac25185812>
>


-- 
cassette tapes - analog TV - film cameras - you

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T581199cf280badd7-M7989bc45bf29bb137679569c
Delivery options: https://agi.topicbox.com/groups/agi/subscription