Yeah, you could think of each of those tasks as being a word, like color, shape,
move, scale, duplicate, and if you see it enough, you write about it more. So
if you see a new cube and have seen "move left" many times, your new cube will
predict "move left" next; the cube essentially decodes itself.
You can see here I wrote a dozen-plus tasks: pattern, untilHits, denoise, move,
change color, rotate, duplicate, scale, keepPositionsABit, inflate screen as
the object that has the most counts (ignoring position, size, etc.), copy
pattern, laser, advanced laser, fill-in, outline, every other outline, connect
objects.
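Here is an illustrative sketch (my own encoding, not from any paper) of how each such task could literally become a word: flatten a small grid into tokens, append the task word, and the pair is just another token sequence a GPT-2 / iGPT-style predictor could be trained on.

# Illustrative sketch: a grid plus a task word becomes an ordinary token
# sequence. The encoding (row-major cells, "|" row separators, task word
# last) is my own choice for demonstration.
def encode_example(grid, task_word):
    tokens = []
    for row in grid:
        tokens.extend(str(cell) for cell in row)
        tokens.append("|")        # mark the end of each row
    tokens.append(task_word)      # the task itself is just another word
    return tokens

grid = [[0, 1],
        [1, 0]]
print(encode_example(grid, "move_left"))
# ['0', '1', '|', '1', '0', '|', 'move_left']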
I'm still trying to figure out how this can be incorporated into GPT-2 / iGPT.
It's like an Internet of Things. It's prediction, and it uses patterns, but it
doesn't seem like it can answer questions. Yet my brain can solve the tests.
OK, I gave it thought. Some of the tests are not physics
https://arxiv.org/pdf/1911.01547.pdf
See those images? I know it's prediction, like GPT-2 or iGPT. But how do these
tests guide us to AGI? I know I can do them, so AGI must be able to. But how
can they help solve cancer, etc.? GPT-2 gives answers; iGPT for video would
too. These tests, however, seem more
Nice tests (images) at bottom of this link:
https://arxiv.org/pdf/1911.01547.pdf
I recognize these in text; the same tests can be done in text. I'd like to see
all 400 tests. I found the GitHub repo, but they are in JSON.
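If it helps, here is a minimal sketch for reading those JSON files as text. It assumes the layout I believe the fchollet/ARC repo uses (each task file holds "train" and "test" lists of input/output grids of small ints); check the actual files before relying on this, and the filename below is just an example.

import json
from pathlib import Path

def print_grid(grid):
    # Render a grid of small ints as rows of digits.
    for row in grid:
        print("".join(str(cell) for cell in row))

def show_task(path):
    task = json.loads(Path(path).read_text())
    for pair in task["train"]:
        print("input:")
        print_grid(pair["input"])
        print("output:")
        print_grid(pair["output"])
        print()

# show_task("ARC/data/training/some_task.json")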
Also, these text questions are really useful:
https://arxiv.org/pdf/1803.05457.pdf
One of the lizard minds is the conscious mind and all the others are
subconscious minds. Pretty much clones.
The extra circuitry is to start and stop them. In a simulated reality they act
as a swarm of VR cameras seeing all angles of their synthesized local
environment.
This Neural Network Creates 3D
On Wednesday, July 08, 2020, at 10:30 AM, James Bowery wrote:
> The "surprise" simply means bits were added to the corpus of the intelligence.
I disagree. What if it had already stored the exact phrase "thank you", and
heard someone say it? It'd strengthen the connections / update the frequency.
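A tiny sketch of what I mean (the counts are made up): if "thank you" is already stored with a high count, hearing it again carries little surprise, and the only update is a stronger count, not a new stored phrase.

import math

phrase_counts = {"thank you": 50, "good morning": 10}
total = sum(phrase_counts.values())

def surprisal_bits(phrase):
    # -log2 P(phrase) under the stored frequencies
    return -math.log2(phrase_counts[phrase] / total)

print(round(surprisal_bits("thank you"), 2))  # small: the phrase is already familiar
phrase_counts["thank you"] += 1               # the only change: bump the frequency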
https://www.youtube.com/watch?v=1l8Jx3koXHM
But... not quite. I believe some subtle things happened between the evolution
of apes and humans, even in the interface between cortex and hippocampus,
allowing better representation of recursive reflection and abstraction...
AGI is made up of lizard minds. One lizard mind needs five temporal memory
areas. First is a raw circular memory. Second is a Markov graph memory. Third
is past memory, now memory, and future memory; a Blender algorithm would go
here. Fourth is a comparative memory that compares the relationship
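To make the first two memory areas concrete, here is a minimal sketch of my own (not from any existing codebase): a raw circular buffer of recent tokens plus a first-order Markov graph counting which token follows which.

from collections import deque, Counter, defaultdict

class LizardMemory:
    def __init__(self, capacity=8):
        self.raw = deque(maxlen=capacity)      # 1) raw circular memory
        self.markov = defaultdict(Counter)     # 2) Markov graph memory

    def observe(self, token):
        if self.raw:
            self.markov[self.raw[-1]][token] += 1   # edge: previous -> current
        self.raw.append(token)

    def predict_next(self, token):
        nxt = self.markov.get(token)
        return nxt.most_common(1)[0][0] if nxt else None

m = LizardMemory()
for t in "the cat sat on the mat the cat ran".split():
    m.observe(t)
print(m.predict_next("the"))  # -> "cat"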
On Wed, Jul 8, 2020 at 12:40 AM Ben Goertzel wrote:
> Gary Marcus's article explains quite clearly why and how GPT2 fails to
> approach human-like AGI,
>
> https://thegradient.pub/gpt2-and-the-nature-of-intelligence/
>
> He also explains the fallacy of simplistically claiming that
> prediction = understanding
> Did some reading. OpenCog is not a single network, is it? It's a collection of
> separate modules.
This is not really a good way to put it. OpenCog's Atomspace
knowledge-store is a single network. There are multiple learning and
reasoning processes that act concurrently and synergetically on
or was generated, but ignored that word or understood it internally to be
after
my phrase-node must have been rolling out, and that word in it was adapted to
the past data
Such an interesting typo above. I meant "after" :P
Ah, good thing I already read Gary Marcus's article. One link down. Yes, GPT-2
is lacking, but I've got those things covered. There's no doubt that GPT-2 is
the foundation, and those issues are solved if we look back into GPT-2 / the
hierarchy. Yes, we often hear new knowledge on the internet, understand
Gary Marcus's article explains quite clearly why and how GPT2 fails to
approach human-like AGI,
https://thegradient.pub/gpt2-and-the-nature-of-intelligence/
He also explains the fallacy of simplistically claiming that
prediction = understanding
The merits or demerits of OpenCog are a different
Make sure to read my above post.
Really? You don't see how Blender (or my improvement above) is closer to AGI
than GPT-2 is? Or that GPT-2 is close-ish to AGI? Do you have something better?
Does it predict text/images better? What does OpenCog AI do if it can't compare
to OpenAI's showcase!?
Yeah we @ SingularityNET have been using Blender, and conditioning
Blender on other specialized corpora, in some application work.
However I don't see how this is directly useful for AGI, though it's
cool for narrow-AI application work...
On Tue, Jul 7, 2020 at 5:27 AM wrote:
>
> Have you seen
You will get that in my upcoming guide, but for now try this explanation
(there are 2 parts to it):
ROOT FORCE: I'll trust you all already know GPT-2 and the even cooler Blender.
My discovery to improve Blender is: these AIs collect lots of diverse/general
data (they explore), but lots of it doesn't answer
Please explain it like I am a five year old.
Have you seen PPLM, CTRL, and Blender? They all do the same thing but are an
improvement on GPT-2. Blender is the furthest along: it controls the
generation, is trained on chat logs, wiki, and empathy data, and finishes its
reply to you.
I can build on Blender. No one yet has realized my