Re: [agi] Lossy Compressed Blockchain As Temporal Memory (still non-mutable)

2018-06-17 Thread Ben Goertzel
n, Jun 17, 2018 at 12:16 AM, Ben Goertzel wrote: >> > My opinion on those technologies you mention is that, while novel and >> > promising, they are still in an experimental stage. It's a huge mistake >> > to >> > assemble a system composed of multiple ex

Re: [agi] Lossy Compressed Blockchain As Temporal Memory (still non-mutable)

2018-06-16 Thread Ben Goertzel
On Sun, Jun 17, 2018 at 2:41 AM, Matt Mahoney via AGI wrote: > No, Stefan Reich and Mark Nuzz are right. Blockchain has nothing to do > with AGI and doesn't help. > > Blockchain is a terrible way to implement a distributed data store. It > is not scalable. Every node stores a copy of every

Re: [agi] Lossy Compressed Blockchain As Temporal Memory (still non-mutable)

2018-06-17 Thread Ben Goertzel
> My opinion on those technologies you mention is that, while novel and > promising, they are still in an experimental stage. It's a huge mistake to > assemble a system composed of multiple experimental or bleeding edge > technologies No it isn't. Most who try this will fail... But some subset

Re: [agi] The Singularity Forum

2018-06-14 Thread Ben Goertzel
Rubik’s > cube... if anyone else has heard of it, what do you guys think? AGI or > overblown general digraph searcher? > > Sent from ProtonMail Mobile > > > On Thu, Jun 14, 2018 at 2:06 AM, Ben Goertzel wrote: > > It's OK the Singularity is still on track ;-) On Thu, Jun 1

Re: [agi] The four things needed to solve AGI.

2018-07-24 Thread Ben Goertzel
On Fri, Jun 8, 2018 at 4:28 AM, Alan Grimes wrote: > I'm becoming increasingly horrified by how people are entertaining > Mentifex. We really really don't have any more time to waste on that > crackpot asshole. =| > Hey man, don't call Mentifex an a-hole ... from everything I can tell he is a

Re: [agi] Compressed Algorithms that can work on compressed data.

2018-10-05 Thread Ben Goertzel
wer classes. (This is how many algorithms work now that I > think about it, but they are not described and defined using the > concept of compression abstractions as a fundamental principle.) > Jim Bromer -- Ben Goertzel, PhD http://goertzel.org "The dewdrop world / Is the dewdrop world / And yet, and ye

Re: [agi] openAI's AI advances and PR stunt...

2019-02-18 Thread Ben Goertzel
is permutation "invariants" will > need to be constantly generated too. But by using permutation as his base, > the machinery is all there. Permutation is a generative process. > > -Rob > > On Mon, Feb 18, 2019 at 9:26 PM Ben Goertzel wrote: >> >> 2013 se

Re: [agi] openAI's AI advances and PR stunt...

2019-02-19 Thread Ben Goertzel
>> >>> and the dependency decisions they can inform on learned, and thus finite, >>> too. Plus you need to keep two formalisms and marry them together... Large >>> teams for all of that... >> >> >> No. I've already got 75% of it coded up.

Re: [agi] Re: Seeing through another's eyes

2019-03-18 Thread Ben Goertzel
> > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org "Listen: This world is the lunatic's sphere, / Don't always agree it's real. / Even with my feet upon it / And the postman knowin

[agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Ben Goertzel
now there are some cases where keeping something secret may be the most ethical choice ... but the fact that they're willing to take this step simply for a short-term one-news-cycle PR boost, indicates that open-ness may not be such an important value to them after all... -- Ben Goertzel, Ph

Re: [agi] Two Questions about Mentiflex...

2019-02-17 Thread Ben Goertzel
Steve Richfield wrote: > > Arthur, > > I have been one of your few supporters > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org "Listen: This world is the l

Re: [agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Ben Goertzel
nobody is even attempting that. They are just tinkering. Limited to tinkering > with linear models, because nothing else can be "learned". > > On Sun, Feb 17, 2019 at 1:05 PM Ben Goertzel wrote: >> >> Hmmm... >> >> About this "OpenAI keeping th

Re: [agi] openAI's AI advances and PR stunt...

2019-02-16 Thread Ben Goertzel
> Deep learning is locked in the paradigm of learning as much as possible. Like > studying for a test by learning all the answers, rather than understanding > principles which allow you to work out your own answers. That it only learns > patterns, and does not have a principle by which new

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Ben Goertzel
speech to mean something. > > My new approach is to incorporate semantics into a rule engine right from the > start. > > On Sun, 17 Feb 2019 at 02:09, Ben Goertzel wrote: >> >> Rob, >> >> These deep NNs certainly are not linear models, and they do capture a

Re: [agi] openAI's AI advances and PR stunt...

2019-02-17 Thread Ben Goertzel
>>> Meanwhile deep learning just keeps pushing against a ceiling of what can be >>> learned. >>> >>> FWIW you can see an old and simple demo of the principle of hierarchy >>> coming out of novel rearrangements (of embeddings) at: >>> >>> demo

Re: [agi] openAI's AI advances and PR stunt...

2019-02-18 Thread Ben Goertzel
18, 2019 at 3:51 PM Rob Freeman wrote: > > On Mon, Feb 18, 2019 at 4:01 PM Ben Goertzel wrote: >> >> *** >> ... >> And likely the way to do this is to set the network oscillating, and >> vary inhibition to get the resolution of "invariants"

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
arried out on this general-purpose list... On Sun, Feb 10, 2019 at 3:58 AM Linas Vepstas wrote: > > > > On Sat, Feb 9, 2019 at 4:22 AM Ben Goertzel wrote: >> >> >> We are now playing with hybridizing these symbolic-ish grammar >> induction methods wi

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
eb 9, 2019 at 5:31 AM Ben Goertzel wrote: > > *** > > First, the threshold for recursive self improvement is not human level > > intelligence, but human civilization level intelligence. That's higher > > by a factor of 7 billion. > > *** > > > > Obviously

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
for the next phase, the recursively self-improving superintelligence... -- Ben On Sun, Feb 10, 2019 at 2:46 AM Matt Mahoney wrote: > > On Sat, Feb 9, 2019 at 5:31 AM Ben Goertzel wrote: > > *** > > First, the threshold for recursive self improvement is not human level > >

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
GIs you're alluding to may not need to ever happen ben On Sun, Feb 10, 2019 at 3:58 AM Linas Vepstas wrote: > > > > On Sat, Feb 9, 2019 at 4:22 AM Ben Goertzel wrote: >> >> >> We are now playing with hybridizing these symbolic-ish grammar >> induction meth

Re: [agi] The future of AGI

2019-02-10 Thread Ben Goertzel
cquire computing power or the resources (atoms and energy) it needs to grow? > > On Sat, Feb 9, 2019, 9:26 PM Ben Goertzel > >> *** >> Suppose you assembled 1000 of the smartest people in the world into a >> village and cut it off from the rest of the world. No travel in

Re: [agi] Ben Goertzel made Sophia?

2019-02-06 Thread Ben Goertzel
- film cameras - you > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org "The dewdrop world / Is the dewdrop world / And yet, and yet …" -- Kobayashi Issa --

Re: [agi] Ben Goertzel made Sophia?

2019-02-07 Thread Ben Goertzel
> On Thu, 7 Feb 2019 at 02:31, Ben Goertzel wrote: >> >> I did not "make" Sophia. Actually no one person made Sophia, but >> for sure the one who comes closest to deserving that title is Dr. >> David Hanson who sculpted her face and invented the animation

Re: [agi] Ben Goertzel made Sophia?

2019-02-07 Thread Ben Goertzel
> responses actually came from though. Sophia has various control systems, as I elucidated here and elsewhere http://anewdomain.net/ben-goertzel-how-sophia-the-robot-works/ and I don't know which one(s) she was using in that Will Smith interaction as I wasn't involved with that one... it wasn't

Re: [agi] openAI's AI advances and PR stunt...

2019-02-20 Thread Ben Goertzel
raining"? What do you mean by "representation"? What do you mean by >> > "contradiction"?'... >> > But if you haven't understood them, it will probably be easier to use >> > your words than argue about them endlessly. >> >> ???

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
oduced some selected handful) >> * So .. Lets do it. Integrate & test. Its maybe rocket-science; but it's >> not science fiction. >> >>> >>> I hope this work continues. It would be interesting if it advances the >>> state of the art on my la

Re: [agi] The future of AGI

2019-02-09 Thread Ben Goertzel
the acceleration that could be obtained via fundamental algorithmic improvements... -- Ben G On Fri, Feb 1, 2019 at 5:17 AM Matt Mahoney wrote: > > When I asked Linas Vepstas, one of the original developers of OpenCog > led by Ben Goertzel, about its future, he responded with a blog p

Re: [agi] test

2019-06-23 Thread Ben Goertzel
>> Powers are not rights. >> > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org "Listen: This world is the lunatic's sphere, / Don't always agree it's real. / Even with my

Re: [agi] Narrow AGI

2019-08-10 Thread Ben Goertzel
quantum predictor, which produced an output from the same > distribution, and return a different symbol. Either the distribution is > uniform and you are guessing, or it's not uniform and you will do worse than > guessing. > > > On Fri, Aug 9, 2019, 9:26 PM Ben Goertzel wro

Re: [agi] Re: There is a life after Ben Goertzel

2019-09-16 Thread Ben Goertzel
ome AGI-ish components (like OpenCog-based meta-inference agents) and some more narrow-AI-ish components working together... But I'm happy to have my Singularitarian ramblings appreciated as well ;) ben -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who

[agi] Narrow AGI

2019-08-01 Thread Ben Goertzel
https://blog.singularitynet.io/from-narrow-ai-to-agi-via-narrow-agi-9618e6ccf2ce -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are mad to live, mad to talk, mad to be saved, desirous of everything at the same time, the ones who never yawn or say

[agi] Reflections on OpenAI + Microsoft

2019-07-27 Thread Ben Goertzel
http://anewdomain.net/ben-goertzel-whats-so-disturbing-about-microsofts-openai-investment/ On Sat, Jul 27, 2019 at 6:06 PM Stefan Reich via AGI wrote: > > LOL yeah, maybe... although it is doubtful he'll succeed without having any > demos whatsoever. > > I'm thinking now we

Re: [agi] Narrow AGI

2019-08-09 Thread Ben Goertzel
tion theory results don't show there >> is no such thing as a simple learner that is universal in our physical >> universe... >> >> I'm not saying there necessarily is one, just pointing out that the math is >> not so practically applicable as your statement implies

Re: [agi] Narrow AGI

2019-08-09 Thread Ben Goertzel
> > Legg proved there is no such thing as a simple, universal learner. So we > can stop looking for one. > To be clear, these algorithmic information theory results don't show there is no such thing as a simple learner that is universal in our physical universe... I'm not saying there

Re: [agi] While you were working on AGI...

2019-07-16 Thread Ben Goertzel
rnable part > of the knowledge. My proposed solution isn't any cheaper. But good luck if > you think you can do better. > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The on

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-28 Thread Ben Goertzel
ng before in the sentence > itself. "before means before." > > That is bullshit. > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-28 Thread Ben Goertzel
may, > or may not have considered: "unstruct". > > These days I'm thinking a lot about dynamical hierarchies. > > Rob > > From: Ben Goertzel > Sent: Sunday, 29 September 2019 03:26 > To: AGI > Subject: Re: [agi] ca

Re: [agi] can someone tell me what before means without saying before in it?

2019-09-28 Thread Ben Goertzel
t-measuring, binary instrument and space being defined by the relative > objects in a select, version of relational reality - as humankind knows it. > As such, it would be qualifiable and quantifiable. > > > > > From: Ben Goertzel > Sent: Sa

Re: [agi] Whats everyones goal here?

2019-10-15 Thread Ben Goertzel
Singularity or Bust ;) On Thu, 10 Oct 2019, 19:46 , wrote: > I want to make sport robots, so I can make something I wasnt good at > pointless for everyone else to do. > > Whats u guys hitchup? > *Artificial General Intelligence List * > / AGI / see discussions

Re: [agi] The Job market.

2019-10-06 Thread Ben Goertzel
, 40 or 400 bits to really simulate our universe on a standard-issue PC with a lot of auxiliary memory... On Sun, Oct 6, 2019 at 7:32 PM TimTyler wrote: > > On 2019-10-06 06:05:AM, Matt Mahoney wrote: > > On Sun, Oct 6, 2019, 2:59 AM Ben Goertzel wrote or > > quot

Re: [agi] The Job market.

2019-10-06 Thread Ben Goertzel
exist, and we necessarily observe > one where it is possible for life to evolve. > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are

Re: [agi] General Intelligence vs. no-free-lunch theorem

2020-02-04 Thread Ben Goertzel
this > contradiction? > > Thanks a lot. > > Danko > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are m

Re: [agi] General Intelligence vs. no-free-lunch theorem

2020-02-04 Thread Ben Goertzel
somewhere also... (he has an AGI team working out of Kiev...) ben On Tue, Feb 4, 2020 at 7:24 PM Danko Nikolic wrote: > > Hi Ben, > > Thanks for that information. Let me mull over it. > > Danko > > On Tue, Feb 4, 2020 at 9:57 AM Ben Goertzel wrote: >> >> A

Re: [agi] OpenAI is not so open.

2020-02-22 Thread Ben Goertzel
echnologyreview.com/s/615181/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/ > > Artificial General Intelligence List / AGI / see discussions + participants > > + delivery options Permalink > Artificial General Intelligence List / AGI / see discussions

[agi] Formal theory of simplicity

2020-09-03 Thread Ben Goertzel
, is the version of Occam's Razor that's really "as simple as possible but no simpler" where complex cognitive processing is concerned ... not coincidentally it ties closely w/ OpenCog's multi-goal-based control system and Weaver's Open-Ended Intelligence -- Ben Goertzel, PhD http://goertzel.org

Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
s the other like a > chain reaction. After all you need to read your own work to get more out it, > so you must make it small and organized and update it instead of making new > papers. > Artificial General Intelligence List / AGI / see discussions + participants + > deliv

Re: [agi] Formal theory of simplicity

2020-09-03 Thread Ben Goertzel
Ah yes -- but see, the theory is much simpler than the phenomena it explains ;) ... thus my invocation of the saw "simple as possible but no simpler" ;) On Thu, Sep 3, 2020 at 8:48 PM TimTyler wrote: > > On 2020-09-03 20:32:PM, Ben Goertzel wrote: > > Radical overhaul of

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-08-31 Thread Ben Goertzel
Y "step in the right direction" would fly in the face of the pernicious > anti-Occam one-upsmanship that provides aid and comfort to enemies of > humanity occupying positions of high trust and authority in society like > Jonathan Haidt. > > On Sun, Aug 30, 2020 at 12:23 PM

Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
> The paper addresses what to do about the issue of there not > being any single completely satisfactory single metric of > simplicity/complexity. It proposes a solution: use an array > of such metrics and combine them using Pareto optimality. > > I think that is basically correct. You are likely
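The Pareto-combination idea discussed above can be sketched in a few lines of Python (a hypothetical illustration, not code from the paper): score each candidate model on several simplicity metrics and keep only the candidates that no other candidate beats on every metric.

```python
# Hypothetical sketch of the "combine simplicity metrics via Pareto
# optimality" idea; metric names and scores are invented for illustration.

def pareto_front(candidates):
    """Return the candidates not dominated by any other candidate.

    Each candidate is a tuple of metric scores (lower = simpler).
    c dominates d if c <= d on every metric and c < d on at least one.
    """
    def dominates(c, d):
        return all(x <= y for x, y in zip(c, d)) and \
               any(x < y for x, y in zip(c, d))

    return [c for c in candidates
            if not any(dominates(d, c) for d in candidates if d != c)]

# Three made-up models scored by (description length, runtime, rule count):
models = [(10, 5, 3), (8, 7, 3), (12, 6, 4)]
print(pareto_front(models))  # → [(10, 5, 3), (8, 7, 3)]
```

Here the third model is dominated by the first, so it drops out; the two survivors represent different, incomparable trade-offs among the metrics, which is exactly what the Pareto approach is meant to preserve.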

Re: [agi] Formal theory of simplicity

2020-09-04 Thread Ben Goertzel
On Fri, Sep 4, 2020 at 12:24 PM Matt Mahoney wrote: > > The paper lacks an experimental results section. So did Solomonoff's original papers ;-0) > Remember that finding simple theories to fit the data is not computable, > which means that algorithms that do this well are necessarily complex.

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-08-30 Thread Ben Goertzel
decision makers to perform > experiments on billions of unwilling human subjects with all the rigor of a > Medieval Barber's Humor Theory. > > We don't have much time left if we have any at all. > > On Sun, Aug 30, 2020 at 11:59 AM Ben Goertzel wrote: >> >> I.

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-08-30 Thread Ben Goertzel
mpt to estimate the actual information content of these > parameters. Instead, they seem to take _pride_ in the obviously-inflated > "parameter count". > > > > > > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-08-30 Thread Ben Goertzel
do things, rather than just finding the compact model, but it's not sooo perverse if what one has on hand is precisely an algorithm for learning overdetermined accurate models in acceptable time given the hardware at hand... On Sun, Aug 30, 2020 at 9:55 AM Ben Goertzel wrote: > > I

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-08-30 Thread Ben Goertzel
tomated induction. > > The thing that's sending civilization down a rat-hole is the failure to > recognize AIT's value as model selection. > > On Sun, Aug 30, 2020 at 11:39 AM Ben Goertzel wrote: >> >> James, have you seen Poggio's attempt to argue that these >>

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-08-01 Thread Ben Goertzel
cake to build the particular patterns you need for abstract > reasoning on top of that. Eliza did it decades ago. The problem was it > couldn't handle ambiguity. > > -Rob > > On Sat, Aug 1, 2020 at 9:40 AM Ben Goertzel wrote: >> >> Rob, have you looked at the examples cited

[agi] GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Ben Goertzel
(blog post by me) https://multiverseaccordingtoben.blogspot.com/2020/07/gpt3-super-cool-but-not-path-to-agi.html -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are mad to live, mad to talk, mad to be saved, desirous of everything at the same

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Ben Goertzel
> is why maybe. > > However GPT-3 definitely is close-ish to AGI, many of the mechanisms under > the illusive hood are AGI mechanisms. Like turtle > man, the limbs are there, > the eyes, the head, the butt, the spine, the lungs, just doesn't look like man > so much but it's

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Ben Goertzel
them all beforehand at a cost of $12M (Geoff > Hinton suggests end the search at 4.398 trillion = 2^42 :-) > > -Rob > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only

Re: [agi] Re: GPT3 -- Super-cool but not a path to AGI (

2020-07-31 Thread Ben Goertzel
e. You need to give a clear explanation with a clear example. > And only use words that others know, syntactics is kinda a bad-word. > Frequency is a better word. > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Go

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-02 Thread Ben Goertzel
On Thu, Jul 2, 2020 at 6:24 AM John Rose wrote: > > On Wednesday, July 01, 2020, at 9:02 PM, Ben Goertzel wrote: > > Basically what these NNs are doing is finding very large volumes of > simple/shallow data patterns and combining them together. Whereas there are > in fa

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-02 Thread Ben Goertzel
> Can we agree that, regardless of the frontier search heuristics, it would > benefit AI, both general and narrow, to wave about the garlic of "Resource > Constraint" providing "competition classes" _within_ which metrics (of > whatever justification) are fairly compared? Yeah, while I feel

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-02 Thread Ben Goertzel
what the competition should focus on... ben On Thu, Jul 2, 2020 at 12:02 PM James Bowery wrote: > > On Thu, Jul 2, 2020 at 1:23 PM Ben Goertzel wrote: >> > Can we agree that, regardless of the frontier search heuristics, it would >> > benefit AI, both general and n

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-02 Thread Ben Goertzel
in this particular thread... On Thu, Jul 2, 2020 at 8:14 AM James Bowery wrote: > > > > On Thu, Jul 2, 2020 at 12:50 AM Ben Goertzel wrote: >> >> ...I.e. I believe morphisms like the ones alluded to in >> https://arxiv.org/abs/1703.04368 , https://arxiv.org/abs/1703

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-01 Thread Ben Goertzel
>> >>> Now, I wouldn't call this "anti-AIT" if it weren't for the fact that these >>> papers don't even attempt to estimate the actual information content of >>> these parameters. Instead, they seem to take _pride_ in the >>> obviously-inflated

Re: [agi] Re: Call for Models: Working Memory Modelathon 2020

2020-07-07 Thread Ben Goertzel
question/goal). There's no better > than this. It's closer to natural AGI than all others. > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones,

Re: [agi] Re: Call for Models: Working Memory Modelathon 2020

2020-07-08 Thread Ben Goertzel
...will need time > to read > DAMN, so many pages. May have to read above I finish my short AGI guide, > it literally will only be like 10 pages max. > https://goertzel.org/PLN_BOOK_6_27_08.pdf > https://b-ok.org/book/2333263/7af06e > https://b-ok.org/book/2333264/207a57

Re: [agi] Re: Call for Models: Working Memory Modelathon 2020

2020-07-08 Thread Ben Goertzel
. Monkey has a hundred. Human has > a thousand lizard minds. > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are mad to li

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
e the limited parameters of GPTx or other. > > Did you have any success with symbolic dynamics emerging on its own terms, > without assuming it might be summarized in a grammar? > > -Rob > > On Thu, Jul 2, 2020 at 1:49 PM Ben Goertzel wrote: >> >> ... >> F

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
> > On Thu, Jul 2, 2020 at 2:17 PM James Bowery wrote: >> >> Just spitballing here: >> >> Occam's Razor Models of Climate Change >> >> The competition classes would be defined in terms of the scale of the >> datasets available as well as the computer reso

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
back and watch with glee from my > rural backwoods acreage growing chlorella with my posse of war veterans some > of whom were MIT grads? > > Because I care about people. > > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permali

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
> So you found grammars which adequately summarize a symbolic dynamics for > Cisco networks, and are still happy with the idea such generalizations will > capture all the important behaviour? You don't think there are some > behaviours of Cisco networks which are only explained at the network

Re: [agi] Experimental Testing of CIC (the Compression Information Criterion)

2020-07-07 Thread Ben Goertzel
On Mon, Jul 6, 2020 at 11:41 PM Ben Goertzel wrote: > > The COIN Criterion ... sounds like money, it's got to be good... Maybe we can fund the competition by making a Hollywood-style thriller about some Bitcoin criminals... lots of potentials here... > > On Mon, Jul 6, 2020 at 9

Re: [agi] Formal Language Theory Has Its Head Up Its Ass

2020-07-07 Thread Ben Goertzel
A West > journal archives, and one last paper -- bringing everything together -- > progress on which stopped when dementia started setting in after his wife > passed and he followed her. > > What might the world look like today if Faggin had used his founding role at

Re: [agi] Experimental Testing of CIC (the Compression Information Criterion)

2020-07-07 Thread Ben Goertzel
vs memetic > drift reaches the selective regime in time to achieve fixation against the > psychological appeal of the *IC prior. > > On Tue, Jul 7, 2020 at 1:42 AM Ben Goertzel wrote: >> >> The COIN Criterion ... sounds like money, it's got to be good... >> >> On M

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-04 Thread Ben Goertzel
a little different than what you're doing, but I haven't had time/resources to pursue that direction yet... On Fri, Jul 3, 2020 at 8:26 PM Rob Freeman wrote: > > On Sat, Jul 4, 2020 at 9:47 AM Ben Goertzel wrote: >> >> ... >> Of course the grammar rules are only an abstr

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
> > Is it true that selecting the smallest executable archive of a training > dataset corresponds to the model that is most-likely to out-predict other > models? > > Right? Isn't that the experimental program in a nutshell? Well for nontrivial datasets finding the smallest compressing program
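The compression-as-model-selection intuition under discussion can be illustrated with an off-the-shelf compressor standing in for the smallest executable archive (a rough sketch; zlib is only a crude proxy for the uncomputable algorithmic information):

```python
import random
import zlib

# zlib's compressed size stands in, very roughly, for the length of the
# smallest program reproducing the data (which is uncomputable exactly).
structured = b"the cat sat on the mat. " * 100
random.seed(0)
noise = bytes(random.randrange(256) for _ in range(len(structured)))

# Data with exploitable regularities admits a much shorter description;
# patternless data of the same length does not.
print(len(zlib.compress(structured)) < len(zlib.compress(noise)))  # True
```

The gap between the two compressed sizes is the sense in which the structured data "has a model" at all; a better compressor corresponds to a better model, which is why the search for the smallest one is the hard, uncomputable part.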

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-03 Thread Ben Goertzel
Similarly btw, the attractors in a dynamical system only capture a portion of the dynamics -- they, like emergent symbolic-dynamics grammars, are also a layer of abstraction that ignores a lot of the complexity that exists in the trajectories... On Fri, Jul 3, 2020 at 6:46 PM Ben Goertzel wrote
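Ben's analogy can be made concrete with a toy dynamical system (an illustrative sketch, not anything from the thread): for the logistic map at r = 3.2, orbits from different starting points settle onto the same period-2 attractor, so a description of the attractor alone forgets the transient detail of each trajectory.

```python
# Toy illustration: the logistic map at r = 3.2 has a period-2 attractor.
# Orbits from different initial conditions converge to the same two-cycle,
# so the attractor is a lossy abstraction of the full trajectory.
def logistic(x, r=3.2):
    return r * x * (1 - x)

def iterate(x, n):
    for _ in range(n):
        x = logistic(x)
    return x

for x0 in (0.2, 0.7):
    # After the transient dies out, the orbit repeats with period 2...
    assert abs(iterate(x0, 400) - iterate(x0, 402)) < 1e-9
# ...and both orbits end up visiting the same two attractor points
# (roughly 0.513 and 0.799), though their distinct starting points can
# no longer be recovered from the attractor description.
```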

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-01 Thread Ben Goertzel
orpus. > Therefore their (charitable) compression is to 9%. This compares to the > current LTCB leader (cmix) of 12% on only 5% of the GPT3 corpus and on only 1 > CPU week with <32GB RAM. > > > On Wed, Jul 1, 2020 at 1:14 PM Ben Goertzel wrote: >> >> Distillation of l

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-01 Thread Ben Goertzel
sed > on the smallest executable archive of the data, while strong in that limit, > are weaker as an ordering metric for model quality the further one gets from > that limit, e.g. in practical benchmarks such as The Hutter Prize for > Lossless Compression of Human Knowledge." >

Re: [agi] What's With the Anti-AIT Hysteria In Language Modeling?

2020-07-01 Thread Ben Goertzel
> When we're talking about such practicalities, it behooves us to do better > than pull a lightswitch-brain maneuver and say that "the whole beautiful > mathematical house of cards falls apart" and that's that! Rationality, in > fact, demands of us taking RATIOs of rather than dealing in

Re: [agi] Re: Call for Models: Working Memory Modelathon 2020

2020-07-07 Thread Ben Goertzel
alized my achievement so I'll only > explain it if you actually really want AGI. > Artificial General Intelligence List / AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones wh

Re: [agi] Re: Goertzel's "Grounding Occam's Razor in a Formal Theory of Simplicity"

2020-06-27 Thread Ben Goertzel
nt is absurd on its face. > > On Sat, Jun 27, 2020 at 7:15 AM stefan.reich.maker.of.eye via AGI > wrote: >> >> It's a little funny when a paper on defining simplicity is a highly complex >> read... :) > > Artificial General Intelligence List / AGI / see discussions

Re: [agi] Re: Music

2020-07-17 Thread Ben Goertzel
/ AGI / see discussions + participants + > delivery options Permalink -- Ben Goertzel, PhD http://goertzel.org “The only people for me are the mad ones, the ones who are mad to live, mad to talk, mad to be saved, desirous of everything at the same time, the ones who never yawn or say a commonplace

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
lifies as a dynamical > logic's 4-valued (1, i, -1, -i) approach to deriving the core of quantum > mechanics (complex probability amplitudes) as a theorem of the combinatorics > of 4 real-valued, 2x2 spinor matrices. > > On Sat, Jan 2, 2021 at 1:47 PM Ben Goertzel wrote: >>

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
Hmm.. Boundary Institute webpage seems hacked or broken or something... https://www.boundaryinstitute.org/ On Wed, Jan 6, 2021 at 11:32 AM Ben Goertzel wrote: > > That link doesn't work for me ... but I'm highly interested, I wonder > how that 4-valued logic relates to the ones in cons

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
I.e. that website looks unrelated to https://www.encyclopedia.com/science/encyclopedias-almanacs-transcripts-and-maps/boundary-institute-got-psi On Wed, Jan 6, 2021 at 11:34 AM Ben Goertzel wrote: > > Hmm.. Boundary Institute webpage seems hacked or broken or something... >

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
>> mechanics (complex probability amplitudes) as a theorem of the combinatorics >> of 4 real-valued, 2x2 spinor matrices. >> >> On Sat, Jan 2, 2021 at 1:47 PM Ben Goertzel wrote: >>> >>> To kick off the new year ... here is Part 2 of a trilogy of papers I'm

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
>> Along the same lines, how then do you interpret imaginary quantities >> of evidence? > > > These correspond to the "case counts" comprising quantum "complex probability > amplitudes" which only "exist" as potentials as opposed to actuals. This > gets into the whole question/quandary of

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-06 Thread Ben Goertzel
ng from the algebras immanent in multiple coupled/interpenetrating distinctions, in a way analogous to but more complex than how time arises... ben > > On Wed, Jan 6, 2021 at 2:38 PM Ben Goertzel wrote: >> >> Interesting, will reflect a bit on that... >> >> Ja

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-07 Thread Ben Goertzel
https://patents.google.com/patent/US20160148110A1/en -- Ben Goertzel, PhD http://goertzel.org “Words exist because of meaning; once you've got the meaning you can forget the words

[agi] Metagraph morphisms and history hypertrees...

2020-12-14 Thread Ben Goertzel
"OpenCoggy Probabilistic Programming" but I'm trying to formalize the ideas a little more rigorously here... This is all heavily motivated by Hyperon design/prototyping, i.e. wanting to get a clear understanding of what operations most badly need to be made scalable in Hyperon... ben
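[For readers new to the morphism vocabulary in this thread: a morphism between graph-like structures is a structure-preserving map. A toy sketch of my own (not Hyperon code) for plain directed graphs — real metagraph morphisms also map edges to edges and must respect types, but the shape of the check is the same:]

```python
# Toy simplification: a morphism between directed graphs G and H is a
# node map under which every edge of G lands on an edge of H.
def is_morphism(edges_g, edges_h, node_map):
    """True iff mapping each edge (u, v) of G yields an edge of H."""
    return all((node_map[u], node_map[v]) in edges_h for (u, v) in edges_g)

G = {("a", "b"), ("b", "c")}
H = {("x", "y"), ("y", "x")}

assert is_morphism(G, H, {"a": "x", "b": "y", "c": "x"})      # edges preserved
assert not is_morphism(G, H, {"a": "x", "b": "x", "c": "y"})  # (x, x) not in H
```

Formalizing which of these maps compose and factor nicely is where the category-theoretic machinery (and the scalability questions for Hyperon) comes in.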

Re: [agi] CCP as a model for AGI

2020-12-30 Thread Ben Goertzel
teractions. But whatever, I understand what is meant. On Wed, Dec 30, 2020 at 6:54 AM James Bowery wrote: > As with "AI debates" in general, people can easily talk past each other by > failing to acknowledge they are addressing different questions. Ben > Goertzel is addressing

Re: [agi] CCP as a model for AGI

2020-12-30 Thread Ben Goertzel
and supply chains work these days. > Below graph should clarify: > > > > - > regards, > The task is not impossible. > > > > ---- On Wed, 30 Dec 2020 16:31:45 +0530 Ben Goertzel > wrote > > > I don't think China's slightly higher average IQ is

Re: [agi] CCP as a model for AGI

2020-12-30 Thread Ben Goertzel

Re: [agi] CCP as a model for AGI

2020-12-31 Thread Ben Goertzel
cture of electronics, and plenty of other interesting advantages, but the AGI advantage seems clearly to US/UK ... I'd like to understand if there are better arguments though... ben On Wed, Dec 30, 2020 at 8:58 AM James Bowery wrote: > > On Wed, Dec 30, 2020 at 12:17 PM Ben Goertzel wrote:

Re: [agi] CCP as a model for AGI

2021-01-01 Thread Ben Goertzel
studied as a model for AGI? Given the hi-tech impetus obtained from the covid-pandemic, to my mind they are appearing to be rapidly moving towards becoming the first nation with a citified singularity. Certainly, we have much to learn from the Chinese.

Re: [agi] CCP as a model for AGI

2021-01-01 Thread Ben Goertzel
On Thu, Dec 31, 2020 at 4:14 AM James Bowery wrote: > Ben I really hate it when people interject "go read this book" in a > conversation but you're a voracious enough reader that I hope you'll > forgive me when I request that you read E. O. Wilson's "The Social Conquest > of Earth" to get a

Re: [agi] CCP as a model for AGI

2021-01-01 Thread Ben Goertzel
> . I have repeatedly suggested that we hold a reverse Turing competition > (where groups pretend to be AGIs) to see where limitless intelligence might > lead, but so far NO ONE has shown any interest. > Wouldn't you expect this to work about as well as having a bunch of monkeys role-play

Re: [agi] The emergence of the AGI

2021-01-08 Thread Ben Goertzel
> > On Friday, January 08, 2021, at 6:11 PM, Matt Mahoney wrote: > > Google and Alexa will gradually become smarter > > You say they will actually be the AGI? But with a bit of luck it won't actually be Google Assistant and Alexa but some decentralized, OSS, democratically-controlled superior

Re: [agi] Paraconsistent Foundations for Probabilistic Reasoning, Programming and Concept Formation

2021-01-20 Thread Ben Goertzel
s the various OpenCog AI algos) in a common/standard/abstract way using Galois connections of chronomorphisms on directed typed metagraphs whose types are paraconsistent/probabilistic On Thu, Jan 14, 2021 at 12:31 PM Ben Goertzel wrote: > > On Thu, Jan 14, 2021 at 12:10 PM John Rose wrote:
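[For readers unfamiliar with the Galois-connection machinery invoked above, here is a minimal generic illustration (my own, unrelated to the OpenCog formalization): a Galois connection between posets is a pair of monotone maps f, g with f(a) ≤ b iff a ≤ g(b). A classic instance on non-negative integers pairs multiplication with floor division:]

```python
# Galois connection on (non-negative integers, <=):
# f(a) = a * k (lower adjoint) and g(b) = b // k (upper adjoint)
# satisfy the adjunction  f(a) <= b  <=>  a <= g(b)  for fixed k > 0.
k = 7

def f(a):
    return a * k        # lower adjoint

def g(b):
    return b // k       # upper adjoint (floor division)

# Verify the adjunction exhaustively on a small range.
for a in range(50):
    for b in range(50):
        assert (f(a) <= b) == (a <= g(b))
```

The same adjunction pattern, lifted to transformations of (meta)graph histories, is what lets one characterize an algorithm by a pair of maps rather than by its operational details.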
