Re: [agi] Explain your AGI

2020-02-17 Thread Matt Mahoney
It would be fairly simple to write a client-server program implementing competitive message routing for a text-based message pool. I could probably do it in a couple of weeks. But then comes the hard part: in order for it to be useful you need a large user base; otherwise nobody will use it. Google couldn't g
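The preview only sketches the idea; a toy in-process version of competitive message routing might look like the following. All class and expert names here are hypothetical illustrations, not anything from Matt's design:

```python
# Toy competitive message routing: each expert scores an incoming
# message, and the pool delivers it to the highest-scoring expert.
# Names and the keyword-overlap scoring rule are made up for
# illustration; a real system would use learned relevance models.

class Expert:
    def __init__(self, name, keywords):
        self.name = name
        self.keywords = set(keywords)

    def score(self, message):
        # Crude relevance score: fraction of this expert's keywords
        # that appear in the message.
        words = set(message.lower().split())
        return len(self.keywords & words) / len(self.keywords)

class MessagePool:
    def __init__(self, experts):
        self.experts = experts

    def route(self, message):
        # Experts "compete": the best scorer receives the message.
        return max(self.experts, key=lambda e: e.score(message)).name

pool = MessagePool([
    Expert("compression", ["compress", "entropy", "zip"]),
    Expert("deep_learning", ["neural", "network", "training"]),
])
print(pool.route("how do I compress text with low entropy?"))  # compression
```

The competitive part is just the `max` over scores; a networked version would replace the in-process loop with the client-server plumbing the post mentions.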

Re: [agi] Explain your AGI

2020-02-16 Thread James Bowery
Nick Szabo's blog and its mentions of Kolmogorov Complexity. On Sun, Feb 16, 2020 at 8:00 PM James Bowery wrote: > OK. I was just kind of hoping it was close enough to at least turn into > an open source project,

Re: [agi] Explain your AGI

2020-02-16 Thread James Bowery
OK. I was just kind of hoping it was close enough to at least turn into an open source project, given you drew a comparison with Bitcoin. So I suppose this is more along the lines of a "white paper" for such? Quite seriously, Nick Szabo (bitgold white paper author, precursor to Satoshi Nakamoto's

Re: [agi] Explain your AGI

2020-02-16 Thread Matt Mahoney
On Sun, Feb 16, 2020, 6:31 PM James Bowery wrote: > > > On Sat, Feb 15, 2020 at 7:34 PM Matt Mahoney > wrote: > >> My 2008 design for distributed AGI. http://mattmahoney.net/agi2.html >> > > How far is this from a specification you would accept from a student? > I wouldn't give this type of ass

Re: [agi] Explain your AGI

2020-02-16 Thread immortal . discoveries
I like how Matt depicts us humans as nodes in a high-dimensional space, like Word2Vec, so that we have a semantic web of human experts: when someone has a question, it can be routed to the related experts/writers who have answered such questions before. It's all about routing the questions to the right people. --
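The Word2Vec analogy above can be sketched directly: embed each expert and each question in the same vector space and route to the nearest neighbor by cosine similarity. The 3-d vectors below are invented for illustration; a real system would learn them from the experts' writing:

```python
# Route a question to the nearest expert in an embedding space.
# The expert vectors are hypothetical; a real system would learn
# them (Word2Vec-style) from each expert's past answers.
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

experts = {
    "compression": (0.9, 0.1, 0.0),
    "deep_learning": (0.1, 0.9, 0.2),
    "robotics": (0.0, 0.2, 0.9),
}

def nearest_expert(question_vec):
    # The question is embedded into the same space, then routed to
    # the expert whose vector has the highest cosine similarity.
    return max(experts, key=lambda name: cosine(experts[name], question_vec))

print(nearest_expert((0.8, 0.2, 0.1)))  # compression
```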

Re: [agi] Explain your AGI

2020-02-16 Thread immortal . discoveries
To me it's worth a lot, but all I need is 10 words; I don't need a whole page. I already wanted a better sharing system years ago. Is anyone else game to try it with just us? I know we already post our questions here, but seemingly there is a more efficient way? We have too many connections, we

Re: [agi] Explain your AGI

2020-02-16 Thread James Bowery
On Sat, Feb 15, 2020 at 7:34 PM Matt Mahoney wrote: > My 2008 design for distributed AGI. http://mattmahoney.net/agi2.html > How far is this from a specification you would accept from a student? -- Artificial General Intelligence List: AGI Permalink: htt

Re: [agi] Explain your AGI

2020-02-16 Thread immortal . discoveries
What is funny is those two 'halves' are our only gateway to inventing AGI. So yes Matt, Message Routing / improving our friend network is 50% of our power to invent AGI.

Re: [agi] Explain your AGI

2020-02-16 Thread immortal . discoveries
What's funny is we can already make discoveries in our brains; we're all good, and we try to improve our ability at that. But our chit-chats on this mailing list / forum are the other half, for sharing already-discovered facts, which we can improve at as well. What's funny is any of our messages

Re: [agi] Explain your AGI

2020-02-16 Thread stefan.reich.maker.of.eye via AGI
I agree with the basic idea of Competitive Message Routing.

Re: [agi] Explain your AGI

2020-02-16 Thread immortal . discoveries
For example: you could group us: 4 deep learners, 3 servers, 6 coders. You could stack us in an entailment chain, e.g. designer > tester > fabricator > seller.

Re: [agi] Explain your AGI

2020-02-15 Thread immortal . discoveries
Indeed humans are nodes and are best connected to the right friends. We gotta compress this net and then extract some goodies!

Re: [agi] Explain your AGI

2020-02-15 Thread immortal . discoveries
Let's try it; you say it is so good as well. So we need to say what we specialize in, to direct our questions :). For example, I'm the designer, Korrelan is the Deep Learner / net guru, Matt's the compressor evaluator, JB is the social verifier, Ben is the seller, JR is the theorist (sorry :p), and

Re: [agi] Explain your AGI

2020-02-15 Thread immortal . discoveries
It is true that for us to make AGI (or for AGI to make tools), we/it need to not only make our own discoveries/tools well, but also share already-made queries/tools to the right spots, just like you say. I'm a generator/finder making a generator/finder :). I'm sure we can work out their sharing c-op

Re: [agi] Explain your AGI

2020-02-15 Thread immortal . discoveries
Ya Matt, sorta like if I have questions about Deep Learning (which I do), I could get some fast answers if we had that, right? Of course we won't get everyone to do it. And it won't do as much as inventing AGI will. And we do have AI experts HERE (or somewhere nearby) already, just enough to a

Re: [agi] Explain your AGI

2020-02-15 Thread Matt Mahoney
My 2008 design for distributed AGI. http://mattmahoney.net/agi2.html The idea is to have a lot of narrow AI experts and a network for routing messages to the right ones, so that to the user it appears as one big expert on all topics. Because of the high cost (USD $1 quadrillion, or 15 years world
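The cost figure in the preview ("USD $1 quadrillion, or 15 years world…") can be sanity-checked with back-of-envelope arithmetic, assuming the second number refers to world GDP (roughly $66 trillion/year around 2008; that figure is an outside assumption, not stated in the preview):

```python
# Back-of-envelope check of the cost claim in the post.
# The world-GDP figure is an assumption, not from the preview.
total_cost = 1e15      # USD 1 quadrillion
world_gdp = 66e12      # USD per year, circa 2008 (assumed)
years = total_cost / world_gdp
print(round(years))  # 15
```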

Re: [agi] Explain your AGI

2020-02-14 Thread immortal . discoveries
Before you do any real-world R&D / experiments, you need to research a lot, form hypotheses, and extract data. 'Walkers' can't think about problems not in front of themselves. You need a mind first, usually.

Re: [agi] Explain your AGI

2020-02-14 Thread immortal . discoveries
Huh? Go back to where? The 3D simulated 'walkers'? And what do you expect them to do? Gather data. Then try experiments. Repeat. Which in effect lets them invent desired tools/procedures. But we aren't even ready to try experiments without the 'thinker' inside the noggin. And as I said, data in

Re: [agi] Explain your AGI

2020-02-14 Thread Alan Grimes via AGI
immortal.discover...@gmail.com wrote: > My basic plan is to make a really, really good predictor for Text, > using Lossless Compression for evaluation obviously, and it should > 'talk' like GPT-2 and even better. BAKA! Text only seems easy because doing absurdly trivial computations on it has b
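The quoted plan relies on lossless compression as the evaluation metric: a better predictor compresses text to fewer bits. The core idea can be illustrated with an off-the-shelf compressor standing in for the learned model (zlib here is a crude stand-in, not the predictor the poster describes):

```python
# Smaller compressed size = better implicit model of the data.
# zlib stands in for a learned predictor + arithmetic coder.
import random
import zlib

predictable = b"the cat sat on the mat. " * 40   # highly regular text
random.seed(0)                                   # deterministic "noise"
noise = bytes(random.randrange(256) for _ in range(len(predictable)))

# Regular text compresses far more than incompressible noise.
print(len(zlib.compress(predictable)) < len(zlib.compress(noise)))  # True
```

A real evaluation (as in compression benchmarks) would compare two predictors on the same corpus and prefer the one producing the smaller losslessly compressed output.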