Maybe I mistyped the URL. Try http://algorithmicbotany.org/ or just type it into a search engine. Click "papers" to get to the meat of it: http://algorithmicbotany.org/papers/ Instead of looking at the latest papers (from 2020), it might be more educational to start with the oldest ones first, those from the 1980s, and then move forward. The early papers might strike you as trite or trivial; the seeming simplicity is misleading. They set up a plot that is revealed only later.
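(Those papers are built around L-systems, Lindenmayer's grammar-rewriting formalism for plant growth. To make the "simplicity" concrete, here is a minimal sketch of an L-system rewriter, using Lindenmayer's classic "algae" rules; this is an illustration, not code from the papers.)

```python
# Minimal L-system rewriter. An L-system is just a grammar whose production
# rules are applied to every symbol in parallel, once per generation.
# Rules below are Lindenmayer's original "algae" system: A -> AB, B -> A.

def rewrite(axiom, rules, steps):
    """Apply the production rules to every symbol, in parallel, `steps` times."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(c, c) for c in s)
    return s

rules = {"A": "AB", "B": "A"}
for n in range(5):
    print(n, rewrite("A", rules, n))
# A, AB, ABA, ABAAB, ABAABABA -- the string lengths follow the Fibonacci numbers.
```

The seemingly trivial rule set already generates non-obvious global structure, which is the point the early papers are setting up.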
Again, regarding theorem proving: logic is built on the foundations of grammar. Natural language is built on the foundations of grammar. The 3D structure of biological molecules is built on a foundation of grammar (aka "chemistry", aka "atomic physics"). Proofs are expressions of grammars; theorem provers generate proofs from grammars. But equally important is the converse: what laws and rules can we deduce when working in the opposite direction?

For example, the axioms of algebraic topology (Eilenberg-Steenrod) are not "god-given"; they arise from the study of the structural properties of spaces. Once you have these axioms in your pocket, sure, you can generate and prove theorems with them. But who created the axioms? Well, Eilenberg... and Steenrod... and many, many others. Human mathematicians swarmed over the problem space like ants, exploring every nook and cranny.

Constructing proofs is only half the work that mathematicians do, and, since we now understand that theorem-proving is a mechanical, algorithmic process, it is indeed something that machines might reasonably excel at. The other half of what mathematicians do is to formulate theorems that are interesting... to other humans. This turns the process of theorem-proving on its head. Given properties of topological spaces, what are the *interesting* things one can say about them? Why are we, as humans, interested in K-theory and cobordism, instead of (for example) whatever you might get if you drop additivity or exactness?

I call this last process "extracting a grammar from a sampling of the 'physical' universe". Once you have that grammar (a collection of axioms and deduction rules), you can feed it into a theorem prover, and out fall a bunch of proofs. Great! But an absolutely key step is to obtain the grammar in the first place.
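(The "feed the grammar in, and out fall proofs" half really is mechanical. A toy sketch of forward chaining over a made-up rule set; the facts and rules here are purely illustrative, not from any real prover.)

```python
# Toy forward-chaining "prover": given axioms (facts) and deduction rules,
# mechanically enumerate everything derivable, until nothing new appears.
# All facts and rules below are made up for illustration.

def forward_chain(axioms, rules):
    """rules: list of (premises, conclusion) pairs; derive to a fixpoint."""
    known = set(axioms)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and all(p in known for p in premises):
                known.add(conclusion)   # a one-step "proof" of the conclusion
                changed = True
    return known

axioms = {"p", "q"}
rules = [({"p", "q"}, "r"),   # from p and q, deduce r
         ({"r"}, "s"),        # from r, deduce s
         ({"t"}, "u")]        # never fires: t is not derivable
print(sorted(forward_chain(axioms, rules)))  # ['p', 'q', 'r', 's']
```

The hard part, as above, is not this loop; it is deciding which axioms and rules to hand it in the first place.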
Axioms are not god-given; they are discovered and formalized by humans, because the humans believe that the structures they describe, and the theorems they allow to be proven, are interesting and important... to humans. Neither GPT nor ATP has the ability to discover axioms. Actually, as far as I can tell, the folks working on GPT and ATP are not even aware of this as a conceptual issue, or a problem to be solved.

GPT does encode grammars, in an utterly opaque and non-human-understandable collection of weights in dense graphs. Insofar as that encoding is accurate and high-fidelity, GPT can achieve "brilliant" results. My goal with opencog is to *not* use dense networks, but to use sparse networks, which *are* human-readable, and have the form of collections of... axioms and rules. It's the third leg of a tripod that, for whatever reason, no one seems to be aware of or acknowledge.

Yes, GPT learns; it is, unfortunately, using the "wrong" representational system. It's mapping nature, which is sparse, into vector spaces, which are dense. Theorem provers start with rules and axioms, and generate sparse networks (the proofs). Until you bridge this divide, this canyon, this disconnect, you ain't got nothing. Once you do bridge it, you can say you're finally on the road to general AGI.

-- Linas

On Sun, Jan 17, 2021 at 3:59 PM Jon P <[email protected]> wrote:

> Firstly thanks for the nice responses, it’s kind of you all to spend time writing to me.
>
> Ben it’s a really interesting point that ATP is solved in the abstract but just computationally intractable. I suppose it’s a bit like Go: it can be solved by breadth-first search, given enough time.
>
> Another interesting question it got me thinking about: what mathematics should an AGI pursue? For example it would be possible to set a formal system going and get it to construct arbitrary theorems by applying rules to existing theorems, so why are some theorems more interesting than others?
> I think you touch on this in an interesting way Ben when you talk about some lemmas being more useful than others for proving other theorems; that’s fascinating, and maybe circular. I guess being motivated by science is a good forcing function for the direction the system should take.
>
> I agree with your point about how GPT systems can sort of end up like the ravings of a genius lunatic, sometimes precise and clear and sometimes total nonsense. However I think the ATP setting is rather different because the output of the system can be verified algorithmically.
>
> So when GPT-3 makes up a human sentence it can’t ever know how good it is; it’s on a grey scale from nonsense to Shakespeare. Whereas if GPT-f produces a proof, either it is correct or not, and this can be checked. So in a way the fact that a lot of what it produces is nonsense isn’t a difficulty. In fact might it almost be a strength? A lunatic who has strokes of genius might, in the long run, be extremely creative and inventive; someone who never says anything dumb might be quite restricted in what they produce.
>
> Also, talking of how metamath is “low level” and doesn’t use tactics much, aren’t the previously proven theorems sort of the abstractions? For example in metamath you can press a button to see the entire proof only as a list of axiomatic substitutions; this is almost like the binary code of a mathematical proof. A theorem is essentially some big string of axioms which has been compacted into a high-level statement, moreover with slots so it is modifiable.
>
> So just by proving theorems isn’t the system moving up layers of abstraction? Isn’t the database itself a stand-in for “understanding” and “sketch proofs”, where a system with a big database already can produce proofs more effectively than one which had no database and was restricted to using only the axioms? Isn’t that the knowledge and understanding which is growing over time?
> I hope I have explained that reasonably; I tried to understand your nice piece as best I could.
>
> Nil, that is really interesting about Gödel machines. The idea of the machine being able to prove things about its own code is very interesting, I think. As I mentioned, Mario Carneiro in the metamath group is building a system which can formally prove things on the x86 architecture, so there is a lot of overlap between that and the tools the Gödel machine would need, I think. Sounds like you have made some nice progress here too. I also really like the sound of it from a control-problem perspective: that “only adapt if a new approach is provably better” gives some hope the system can be restrained.
>
> I wondered about trying to make an AI system which moved a bishop towards a goal on a chessboard and was allowed to modify its movement algorithm on condition that it could prove the bishop would never step on a white square. I felt like this might demonstrate something interesting about the control problem.
>
> Linas, everything you are saying around the atomspace and networks with probabilistic connections is interesting. I wonder if there is some connection with “conjectures”, which maybe are theorems which are only probably true and not binary true?
>
> I am afraid the link to algorithmic botany did not work so well for me; maybe the site is down? It sounds super interesting though, I would be interested to learn more about the foundations of biology like that.
>
> Thank you all for taking the time to respond. It has taken me quite a while to look at the things you have sent, so if you respond again, and please don’t feel any pressure to, it might take me a while to get back to you, and I might not manage it. I don’t mean to be rude; it’s just that there is a lot, of very interesting things, to think about here.
> On Sunday, January 17, 2021 at 8:30:17 PM UTC linas wrote:
>
>> I should proof-read my emails, first:
>>
>> On Sun, Jan 17, 2021 at 2:21 PM Linas Vepstas <[email protected]> wrote:
>>
>>> I've wrapped a few things here and there. I have not daydreamed about wrapping formal systems, but have not found any compelling reason to do so. I strongly encourage others to do so. (Well, I strongly encourage planning and strategizing, first. Good plans result in fewer disappointments.)
>>
>> I meant to say "I have daydreamed about wrapping formal systems". High on my list is ASP - answer-set programming (Potassco, the Potsdam solver) - but certainly, other solvers, HOL systems, and metamath are all reasonable candidates, and really, the question has to be "great! so what? now what do we do?" I have a provisional answer to that, but maybe later.
>>
>> -- Linas

--
Patrick: Are they laughing at us?
Sponge Bob: No, Patrick, they are laughing next to us.

--
You received this message because you are subscribed to the Google Groups "opencog" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/opencog/CAHrUA36qeL%3DDqiUQzAFFs7raCF17pHpLcVLficm6q5%2BO3HE12Q%40mail.gmail.com.
