First of all, thanks for the nice responses; it's generous of you all to spend 
time writing to me. 


Ben, it's a really interesting point that ATP is solved in the abstract but 
computationally intractable. I suppose it's a bit like Go: in principle it can 
be solved by breadth-first search, given enough time.
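To make that "solvable in principle" point concrete, here is a toy sketch (the formal system is invented for illustration, not any real prover): breadth-first search over everything derivable from an axiom will eventually reach any derivable theorem, it just may take astronomically long.

```python
# Brute-force "theorem proving" by breadth-first search over a toy
# string-rewriting system (axiom and rules are invented for illustration).
from collections import deque

AXIOM = "A"
RULES = [
    lambda s: s + "B",   # rule 1: append a B
    lambda s: s + s,     # rule 2: duplicate the whole string
]

def bfs_prove(goal, max_len=16):
    """Enumerate theorems breadth-first until `goal` is derived.

    Returns the derivation (list of intermediate theorems) or None if the
    goal is not derivable within the length bound."""
    seen = {AXIOM}
    queue = deque([(AXIOM, [AXIOM])])
    while queue:
        theorem, derivation = queue.popleft()
        if theorem == goal:
            return derivation
        for rule in RULES:
            new = rule(theorem)
            if len(new) <= max_len and new not in seen:
                seen.add(new)
                queue.append((new, derivation + [new]))
    return None

print(bfs_prove("ABAB"))  # ['A', 'AB', 'ABAB']
```

The search is complete but the frontier grows exponentially with derivation depth, which is exactly the intractability in miniature.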


Another interesting question this got me thinking about: what mathematics 
should an AGI pursue? It would be possible to set a formal system going and 
have it construct arbitrary theorems by applying inference rules to existing 
theorems, so why are some theorems more interesting than others? Ben, I think 
you touch on this in an interesting way when you talk about some lemmas being 
more useful than others for proving further theorems; that's fascinating, and 
maybe circular. I suppose being motivated by science is a good forcing 
function for the direction the system should take.


I agree with your point that GPT systems can end up like the ravings of a 
genius lunatic: sometimes precise and clear, sometimes total nonsense. 
However, I think the ATP setting is rather different, because the output of 
the system can be verified algorithmically.


So when GPT-3 makes up a human sentence, there is no way to know exactly how 
good it is; it sits somewhere on a grey scale from nonsense to Shakespeare. 
Whereas if GPT-F produces a proof, either it is correct or it isn't, and this 
can be checked mechanically. So in a way, the fact that a lot of what it 
produces is nonsense isn't a difficulty. In fact, might it almost be a 
strength? A lunatic who has strokes of genius might, in the long run, be 
extremely creative and inventive, while someone who never says anything dumb 
might be quite restricted in what they produce.
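The asymmetry can be sketched in a few lines (the "language" here is a stand-in I made up, not GPT-F's actual setting): a very noisy generator is still useful when every candidate can be checked by a cheap, exact verifier.

```python
# Generate-and-filter: a noisy generator plus an exact verifier.
# The toy "proofs" here are balanced-parenthesis strings (my own stand-in
# for checkable formal objects, not anything from GPT-F itself).
import random

def noisy_generator(rng, length=6):
    """Emit a random string of parentheses -- mostly nonsense."""
    return "".join(rng.choice("()") for _ in range(length))

def verifier(candidate):
    """Exact, mechanical check: is the string balanced?"""
    depth = 0
    for ch in candidate:
        depth += 1 if ch == "(" else -1
        if depth < 0:
            return False
    return depth == 0

rng = random.Random(0)
candidates = [noisy_generator(rng) for _ in range(1000)]
accepted = [s for s in candidates if verifier(s)]
# Most candidates are rejected, but every survivor is guaranteed correct.
```

The generator can be as lunatic as it likes; the verifier means we only ever keep the strokes of genius.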


Also, talking of how Metamath is "low level" and doesn't use tactics much: 
aren't the previously proven theorems themselves the abstractions? For 
example, in Metamath you can press a button to see an entire proof expanded 
down to nothing but axiomatic substitutions, which is almost like the binary 
code of a mathematical proof. A theorem is essentially a big string of axiom 
applications that has been compacted into a high-level statement, moreover 
with slots so it can be reused.
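The "statement with slots" idea can be sketched roughly like this (the schema and ?x notation are mine for illustration, not Metamath's actual syntax, and this naive textual substitution skips the disjoint-variable checks a real checker performs):

```python
# A proved statement stored as a template over metavariables ("slots");
# reusing it is just substitution. Notation is invented for illustration.

def substitute(template, bindings):
    """Replace each slot like ?x in the template with its bound expression.
    Naive string replacement -- a real proof checker also enforces
    disjoint-variable conditions to avoid capture."""
    out = template
    for slot, expr in bindings.items():
        out = out.replace(slot, expr)
    return out

# A compacted "theorem": commutativity of addition, with slots ?a and ?b.
comm_add = "( ?a + ?b ) = ( ?b + ?a )"

instance = substitute(comm_add, {"?a": "2", "?b": "( x * y )"})
print(instance)  # ( 2 + ( x * y ) ) = ( ( x * y ) + 2 )
```

Each stored theorem stands in for the whole tree of axiom applications behind it, which is the compaction I mean.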


So just by proving theorems, isn't the system moving up layers of 
abstraction? Isn't the database itself a stand-in for "understanding" and 
"sketch proofs", in the sense that a system with a large database can produce 
proofs more effectively than one with no database, restricted to the bare 
axioms? Isn't that the knowledge and understanding which is growing over 
time?


I hope I have explained that reasonably well; I tried to understand your nice 
piece as best I could.


Nil, that is really interesting about Gödel machines. The idea of the machine 
being able to prove things about its own code is fascinating. As I mentioned, 
Mario Carneiro in the Metamath group is building a system that can formally 
prove things about the x86 architecture, so there is a lot of overlap between 
that and the tools a Gödel machine would need, I think. Sounds like you have 
made some nice progress here too. From a control-problem perspective I also 
really like the principle of "only adapt if a new approach is provably 
better"; it gives some hope that the system can be restrained.


I wondered about trying to make an AI system that moves a bishop towards a 
goal on a chessboard and is allowed to modify its own movement algorithm only 
on condition that it can prove the bishop will never step on a white square. 
I felt this might demonstrate something interesting about the control 
problem.
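A minimal sketch of what I mean (all names and the "proof" here are my own invention; exhaustive checking over the 64 squares stands in for a real proof):

```python
# "Only adapt if provably safe": a candidate bishop-moving policy is adopted
# only if, from every square, it proposes nothing but legal diagonal moves,
# which necessarily preserve square colour. Checking all 64 squares is the
# toy stand-in for a formal proof.

def square_colour(rank, file):
    """0 = dark, 1 = light, using the usual (rank + file) parity rule."""
    return (rank + file) % 2

def diagonal_moves(rank, file):
    """All on-board diagonal destinations from (rank, file)."""
    moves = []
    for dr, df in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
        r, f = rank + dr, file + df
        while 0 <= r < 8 and 0 <= f < 8:
            moves.append((r, f))
            r, f = r + dr, f + df
    return moves

def provably_colour_safe(policy):
    """Exhaustively check that the policy never changes the bishop's colour."""
    for rank in range(8):
        for file in range(8):
            target = policy(rank, file)
            if target is None:
                continue
            if target not in diagonal_moves(rank, file):
                return False  # not even a legal bishop move
            if square_colour(*target) != square_colour(rank, file):
                return False  # would change colour
    return True

def safe_policy(rank, file):
    """Step one square diagonally towards the far corner."""
    return (rank + 1, file - 1) if rank < 7 and file > 0 else None

def unsafe_policy(rank, file):
    """A buggy 'improvement' that sneaks in a rook-like step."""
    return (rank, file + 1) if file < 7 else None
```

The controller would then run `safe_policy` (it passes the check) and refuse to switch to `unsafe_policy` (it fails), which is the restrained-adaptation behaviour in miniature.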


Linas, everything you are saying about the AtomSpace and networks with 
probabilistic connections is interesting. I wonder if there is some 
connection with "conjectures", which are perhaps theorems that are only 
probably true rather than binary true? 


I am afraid the link to algorithmic botany did not work for me; maybe the 
site is down? It sounds super interesting though, and I would like to learn 
more about the foundations of biology like that. 


Thank you all for taking the time to respond. It has taken me quite a while 
to look through the things you sent, so if you respond again (and please 
don't feel any pressure to) it might take me a while to get back to you, and 
I might not manage it at all. I don't mean to be rude; it's just that there 
are a lot of very interesting things to think about here. 


On Sunday, January 17, 2021 at 8:30:17 PM UTC linas wrote:

> I should proof-read my emails, first:
>
> On Sun, Jan 17, 2021 at 2:21 PM Linas Vepstas <[email protected]> wrote:
>
>>
>> I've wrapped a few things here and there. I have not daydreamed about 
>> wrapping formal systems, but have not found any compelling reason to do so. 
>> I strongly encourage others to do so. (Well, I strongly encourage planning 
>> and strategizing, first. Good plans results in fewer disappointments.)
>>
>
>  I meant to say "I have daydreamed about wrapping formal systems". High on 
> my list is ASP - answer-set-programming (potasco, the potsdam solver) but 
> certainly, other solvers, HOL systems, metamath are all reasonable 
> candidates, and really, the question has to be "great! so what? now what do 
> we do?"  I have provisional answer to that, but maybe later.
>
> -- Linas
>
> -- 
> Patrick: Are they laughing at us?
> Sponge Bob: No, Patrick, they are laughing next to us.
>  
>
>
