On Sun, Aug 2, 2020 at 1:58 AM Ben Goertzel wrote:
> ...
> ...I also think that the search for concise
> abstract models is another part of what's needed...
>
It depends how you define "concise abstract model". Even maths has an
aspect of contradiction. What does Chaitin call his measure of...
"If you could predict the next word.."
"any information and therefore it wouldn't be said.."
And how does that help prediction? It doesn't.
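The point behind those fragments is the standard information-theoretic one:
a word you can predict with probability p carries log2(1/p) bits, so a
perfectly predicted word carries no information at all. A minimal Python
sketch of just that arithmetic (the probabilities are made up):

import math

# Self-information of a word predicted with probability p is log2(1/p) bits.
# A word predicted with certainty (p = 1.0) carries zero information.
for p in (0.01, 0.5, 0.99, 1.0):
    print(f"p = {p}: {math.log2(1 / p):.2f} bits")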
No, use common words when you write books and replies. It helps even the author.
--
immortal.discover...@gmail.com wrote:
I don't know how you guys have been working on AGI for 30+ years and still
can't say anything about how to actually predict the next word clearly using
common words and using few words so any audience can quickly learn what you
know.
Why can't you explain your AGI like this below? They all build...
On Sat, Aug 1, 2020 at 7:08 PM Matt Mahoney wrote:
>
> On Fri, Jul 31, 2020, 10:00 PM Ben Goertzel wrote:
>
>> I think "mechanisms for how to predict the next word" is the wrong
>> level at which to think about the problem, if AGI is your interest...
>>
>
> Exactly. The problem is to predict the next bit. I mean my interest is in
> compression, but you still...
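To make the compression framing concrete: a model that assigns probability p
to the bit that actually occurs implies a code length of -log2(p) bits for
it, and an arithmetic coder achieves that total to within a couple of bits.
A minimal sketch with a simple adaptive counter model (my toy illustration,
not anybody's actual compressor):

import math

def ideal_compressed_bits(bits):
    """Sum of -log2 P(next bit) under an adaptive Laplace-style
    counter model. An arithmetic coder driven by these same
    probabilities would hit this length to within ~2 bits."""
    ones = total = 0
    cost = 0.0
    for b in bits:
        p1 = (ones + 1) / (total + 2)        # P(next bit = 1)
        cost += -math.log2(p1 if b else 1 - p1)
        ones += b
        total += 1
    return cost

data = [1, 1, 1, 0, 1, 1, 1, 1, 0, 1] * 10   # 100 bits, mostly ones
print(f"{len(data)} raw bits -> {ideal_compressed_bits(data):.1f} model bits")

Better prediction means a smaller output; that is the whole equivalence.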
How many billion parameters do PLN and TLCG have?
Applications of category theory by Coecke, Sadrzadeh, Clark and others in
the '00s are probably also formally correct.
As were applications of the maths of quantum mechanics. Formally. Does
Dominic Widdows still have that conference?
Contradictions are an interesting and important topic...
PLN logic is paraconsistent, which Curry-Howard-corresponds to a sort
of gradual typing.
Intuitionistic logic maps into Type Logical Categorial Grammar (TLCG)
and such; paraconsistent logic would map into a variant of TLCG in
which there...
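For anyone who hasn't met gradual typing: Python's type hints are themselves
a gradually typed system, so a toy sketch can at least show the flavour,
though whether it genuinely Curry-Howard-corresponds to paraconsistency is
Ben's claim, not something this snippet establishes. The function names here
are mine:

from typing import Any

def checked(n: int) -> int:
    # Fully statically typed region: a checker like mypy verifies this.
    return n + 1

def unchecked(x: Any) -> Any:
    # Gradually typed region: x escapes static checking, so a locally
    # ill-typed value can live here without the rest of the program
    # becoming ill-typed -- loosely like a paraconsistent logic keeping
    # a contradiction contained instead of letting everything follow.
    return x + 1

print(checked(41))             # 42, statically guaranteed
try:
    print(unchecked("41"))     # passes the checker, fails at runtime
except TypeError as err:
    print("contained failure:", err)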
Ben,
By examples do you mean like array reversal in your article?
I agree. This problem may not be addressed by their learning paradigm at
all.
But I disagree this has been the biggest problem for symbol grounding.
I think the biggest problem for symbol grounding has been ambiguity.
Manifest...
All these papers confuse it and elongate it; it's so simple!
--
I want the actual freakin reason why the next word is predicted, for example:
Bark is seen to follow dog 44 times (dog bark, dog bark...) and sleep 4 times
(dog sleep), so dog > bark is probable.
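That rule is just a bigram count. A minimal sketch of exactly that
arithmetic, with a toy corpus of mine built to match the 44/4 numbers:

from collections import Counter, defaultdict

def train_bigrams(tokens):
    """Count how often each word follows each other word."""
    follows = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        follows[prev][nxt] += 1
    return follows

def predict_next(follows, word):
    """Most frequent follower of `word` and its relative frequency."""
    counts = follows[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

corpus = ("dog bark " * 44 + "dog sleep " * 4).split()
print(predict_next(train_bigrams(corpus), "dog"))
# ('bark', 0.9166...): bark follows dog 44 of 48 times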
--
Yous can't even explain how to predict the next word, let alone clearly. Go
on, tell me how; we will start from there.
Above I explained how to do some pretty ok prediction. Brains are basically
made of syntax and semantics.
--
OMG, Goertzel is actually on point... yeah, Wheatley can be frustrating
in a number of ways. I think I got to his core motivation, which I don't
find objectionable in any significant way but he is still quite
solipsistic about his intention to goo the planet and seems to need to
develop his...
I think "mechanisms for how to predict the next word" is the wrong
level at which to think about the problem, if AGI is your interest...
On Fri, Jul 31, 2020 at 6:47 PM wrote:
>
> None of yous are giving a list of mechanisms for how to predict the next word
> like I did above. You need to give...
None of yous are giving a list of mechanisms for how to predict the next word
like I did above. You need to give a clear explanation with a clear example.
And only use words that others know; "syntactics" is kind of a bad word.
Frequency is a better word.
--
Rob, have you looked at the examples cited in my article that I
linked here? Seeing this particular sort of stupidity from them,
it's hard to see how these networks would be learning the same sorts
of "causal invariants" as humans are...
Transformers clearly ARE a full grammar learning...
On Sat, Aug 1, 2020 at 3:52 AM wrote:
> ...
> Semantics:
> If 'cat' and 'dog' both share 50% of the same contexts, then maybe the
> ones they don't share are shared as well. So you see cat ate, cat ran, cat
> ran, cat jumped, cat jumped, cat licked... and dog ate, dog ran, dog ran.
>
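A minimal sketch of that context-overlap step, using the quoted toy data
(the code and corpus are my reconstruction of the idea, not the poster's):

from collections import Counter, defaultdict

def right_contexts(tokens):
    """Map each word to the words seen immediately after it."""
    ctx = defaultdict(Counter)
    for prev, nxt in zip(tokens, tokens[1:]):
        ctx[prev][nxt] += 1
    return ctx

corpus = ("cat ate cat ran cat ran cat jumped cat jumped cat licked "
          "dog ate dog ran dog ran").split()
ctx = right_contexts(corpus)

cat_set, dog_set = set(ctx["cat"]), set(ctx["dog"])
overlap = len(cat_set & dog_set) / len(cat_set | dog_set)
print(f"cat/dog context overlap: {overlap:.2f}")        # 0.50

# "Maybe the ones they don't share are shared as well":
# borrow continuations seen with cat but never with dog.
for word in cat_set - dog_set:
    print(f"guess: dog {word}  (borrowed from cat, weight {overlap:.2f})")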
I was interested to learn that transformers have now completely abandoned
the RNN aspect, and model everything as sequence "transforms" or
re-orderings.
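For concreteness, the non-recurrent core is scaled dot-product
self-attention: every position attends to every other position in one
parallel step, so the whole sequence is transformed at once rather than
scanned left to right. A minimal numpy sketch with random weights (an
illustration, not any particular model):

import numpy as np

def self_attention(X, seed=0):
    # X has shape (seq_len, d). No recurrence anywhere: each output row
    # is a weighted blend of all value rows, computed in one step.
    d = X.shape[1]
    rng = np.random.default_rng(seed)
    Wq, Wk, Wv = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(d)                    # pairwise affinities
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)               # softmax per row
    return w @ V                                     # the sequence "transform"

X = np.random.default_rng(1).normal(size=(5, 8))     # 5 tokens, 8-dim vectors
print(self_attention(X).shape)                       # (5, 8)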
That makes me wonder if some of the theory does not converge on work I like
by Sergio Pissanetzky, which uses permutations of strings to derive...
There has to be a theory of understanding, reasoning, and judging at a
minimum underlying an aspiring AGI design. There is always going to be
a certain trick bag in any AI. If it looks like it is mostly a bag of
tricks, even though they might be REALLY REALLY GOOD tricks, it won't
get you to...
Because it seems GPT-2/3 must be using several mechanisms like the ones that
follow, or else it has no chance of predicting well:
P.S. Attention Heads isn't listed below; that's an important one. It can, for
example, predict a last name accurately by looking only at certain words,
regardless of all others.
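A toy picture of that selectivity (mine, not GPT's internals): a head's
output is a weighted average of per-token value vectors, so if the weights
put nearly all their mass on the position holding the name, the head copies
that token forward regardless of what surrounds it:

import numpy as np

tokens = ["Mr.", "Dursley", "was", "very", "boring"]
V = np.eye(len(tokens))                  # one-hot value vector per token
w = np.array([0.01, 0.96, 0.01, 0.01, 0.01])  # hypothetical weights: mass on "Dursley"
out = w @ V                              # weighted average of the values
print(tokens[int(out.argmax())])         # Dursley -- one word decided the output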
What is your justification/reasoning behind saying
"However GPT-3 definitely is close-ish to AGI, many of the mechanisms
under the illusive hood are AGI mechanisms."
?
I don't see it that way at all...
On Fri, Jul 31, 2020 at 12:39 PM wrote:
>
> Follows is everything I got out of that long...