> Did some reading. OpenCog is not a single network, is it? It's a collection of 
> separate modules.


This is not really a good way to put it.  OpenCog's AtomSpace
knowledge store is a single network.  There are multiple learning and
reasoning processes that act concurrently and synergistically on this
network.
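A toy sketch of that distinction (purely illustrative Python; none of these names are OpenCog's actual API): one shared graph store, and two different processes, a crude deduction pass and an attention-style pruning pass, both operating on the same network rather than on module-private data.

```python
# Toy sketch: one shared knowledge store, several processes acting on it.
# All names and numbers are illustrative -- this is NOT the real AtomSpace API.

atomspace = {
    # (source, target) link -> truth-value strength
    ("cat", "animal"): 0.95,
    ("animal", "living-thing"): 0.99,
}

def inference_step(space):
    """One crude deduction pass: if A->B and B->C exist, add A->C."""
    new = {}
    for (a, b), tv1 in space.items():
        for (b2, c), tv2 in space.items():
            if b == b2 and (a, c) not in space:
                new[(a, c)] = tv1 * tv2  # naive strength combination
    space.update(new)

def attention_step(space, floor=0.5):
    """A second, independent process on the SAME store: prune weak links."""
    for link in [l for l, tv in space.items() if tv < floor]:
        del space[link]

# Both processes read and write the single shared network:
inference_step(atomspace)
attention_step(atomspace)
print(sorted(atomspace))
```

The point of the sketch is only that the processes are separate while the knowledge store is one structure they all share.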

ben

On Wed, Jul 8, 2020 at 12:22 AM <immortal.discover...@gmail.com> wrote:
>
> Ah, good thing I already read Gary Marcus's article. One link down. Yes, GPT-2 
> lacks those things, but I've got them covered. There's no doubt that GPT-2 is 
> the foundation, and those issues are solved if we look back into GPT-2 / the 
> hierarchy. Yes, we often hear new knowledge on the internet and understand / 
> agree with it without having been able to predict it beforehand, but it "does" 
> align/predict with our knowledge; that's why we understand/agree with it. If I 
> learn something by reading it, and like it, that means my brain was likely to 
> have generated it myself; I just didn't get around to it, my attention was 
> elsewhere.
>
>
> Did some reading. OpenCog is not a single network, is it? It's a collection of 
> separate modules. I don't agree with this... BTW, even GPT-2 is not a single 
> net (although it basically functions like one); I had a good look inside, and 
> it's a stack of all sorts of BS. My AGI design is very large and completely 
> unified in a single net/hierarchy, or 2 at most (hierarchy + heterarchy). 
> You'll see why a unified/single net is so key to AGI: all the mechanisms 
> literally blend/mix together to give you "more data".
>
>
> "the contemporary AI community still gravitates towards benchmarking 
> intelligence by comparing the skill exhibited by AIs and humans at specific 
> tasks, such as board games and video games. We argue that solely measuring 
> skill at any given task falls short of measuring intelligence, because skill 
> is heavily modulated by prior knowledge and experience: unlimited priors or 
> unlimited training data allow experimenters to “buy” arbitrary levels of 
> skills for a system, in a way that masks the system’s own generalization 
> power"
>
> Totally get this; you can buy intelligence, but this is how most AI is 
> evaluated. To solve the evaluation issue they mention, the Hutter Prize's 
> lossless-compression evaluation uses a fixed-size dataset and counts the 
> size of your program against your score (well, not a fixed-size net, but 
> they are watching how big your net is getting! Too big and your score goes 
> down), so that you can't buy intelligence and must generalize better.
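The scoring rule being described can be sketched concretely. A toy Python illustration, with zlib as a stand-in compressor and made-up sizes: the archive size and the decompressor's own size both count, so a huge model cannot "buy" a good score.

```python
import zlib

def hutter_style_score(compressed_size: int, decompressor_size: int) -> int:
    """Lower is better: the archive AND the program that unpacks it both
    count, so a bloated model eats its own compression gains."""
    return compressed_size + decompressor_size

# Stand-in data and compressor (the real contest uses a fixed text corpus):
data = b"the cat sat on the mat. " * 1000
archive = len(zlib.compress(data, 9))

# A small generic decompressor vs. a hypothetical giant model whose
# decompressor weighs megabytes (sizes are invented for illustration):
small_program = hutter_style_score(archive, decompressor_size=50_000)
big_model = hutter_style_score(archive, decompressor_size=50_000_000)

print(small_program < big_model)  # the size penalty dominates
```

Under this rule, shrinking the model is worth as much as shrinking the archive, which is exactly why skill can't be bought with unlimited priors or parameters.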
>
>
> I'm reading the paper you linked now, and 3 of your books... will need time 
> to read...
> DAMN, so many pages... may have to read them after I finish my short AGI 
> guide; it literally will only be like 10 pages max.
> https://goertzel.org/PLN_BOOK_6_27_08.pdf
> https://b-ok.org/book/2333263/7af06e
> https://b-ok.org/book/2333264/207a57



-- 
Ben Goertzel, PhD
http://goertzel.org

“The only people for me are the mad ones, the ones who are mad to
live, mad to talk, mad to be saved, desirous of everything at the same
time, the ones who never yawn or say a commonplace thing, but burn,
burn, burn like fabulous yellow roman candles exploding like spiders
across the stars.” -- Jack Kerouac

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/T604ad04bc1ba220c-M51ed9ba9427bd5fd6645b2ff
Delivery options: https://agi.topicbox.com/groups/agi/subscription
