Thanks so much for your nice response, Mario. I think the work you're doing 
is really awesome and I'm very thankful that you would take the time to 
explain things to me. I feel the same way about everyone else in this 
community; I'm so impressed by everyone here.

I dig the idea that each independent check decreases the probability of 
being wrong, so I can see how formal proofs in multiple proof assistants on 
multiple machines make it unlikely that MM0 is flawed; that makes a lot of 
sense. I also see why keeping MM0 as small as possible is so valuable. And I 
guess there are philosophical problems around creating something provably 
logically infallible.

Can I ask how MMC proofs work? I had a little look at formal verification 
for software and saw techniques like exhaustively searching the state space 
or using automated induction. Are you thinking along those lines, where the 
computer runs a standard algorithm over each program?
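
Just to check I have the words right, here is a toy Rust sketch of the kind 
of "exhaustively search the state space" checking I read about: enumerate 
every reachable state of a tiny machine and test a property in each one. 
The machine, the property and the code are all made up by me as an 
illustration; I'm not suggesting this is how MMC works.

    use std::collections::{HashSet, VecDeque};

    // Toy exhaustive model check: explore every reachable state of a
    // small counter machine and assert a safety property in each one.
    fn main() {
        let start: u32 = 0;
        // Transitions: either increment the counter mod 10, or reset it.
        let transitions = |s: u32| vec![(s + 1) % 10, 0];

        let mut seen: HashSet<u32> = HashSet::new();
        let mut queue: VecDeque<u32> = VecDeque::new();
        seen.insert(start);
        queue.push_back(start);

        while let Some(s) = queue.pop_front() {
            // Safety property: the counter never reaches 10.
            assert!(s < 10, "property violated in state {}", s);
            for next in transitions(s) {
                if seen.insert(next) {
                    queue.push_back(next);
                }
            }
        }
        println!("checked {} reachable states; property holds in all", seen.len());
    }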

Or is it more like an MM proof, where there are explicit steps? For 
example, Prime(X) is the sort of function I can't really imagine checking in 
an automated way, since there are so many inputs, though maybe there is some 
nice induction depending on how it works (a sieve of Eratosthenes might be 
amenable to an inductive proof, for example).
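
To make Prime(X) concrete, this is the sort of plain trial-division 
function I was picturing, with the inductive part written as comments. 
Again this is only my own Rust sketch, not MMC syntax or how a verifier 
would actually state it.

    // A plain trial-division primality test. A deductive proof of it
    // wouldn't try every input; it would hang on the loop invariant below.
    fn is_prime(n: u32) -> bool {
        let n = n as u64; // widen so d * d below can't overflow
        if n < 2 {
            return false;
        }
        let mut d: u64 = 2;
        // Loop invariant: no integer in 2..d divides n.
        while d * d <= n {
            if n % d == 0 {
                return false; // found a divisor, so n is composite
            }
            d += 1;
        }
        // Here d * d > n and nothing in 2..d divides n; a small lemma
        // (in any factorisation a * b = n, a or b is at most sqrt(n))
        // closes the gap to "n has no divisor in 2..n at all".
        true
    }

    fn main() {
        for n in [1u32, 2, 17, 91, 97] {
            println!("is_prime({}) = {}", n, is_prime(n));
        }
    }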

Does the user have to specify the steps themselves, can the compiler help 
generate them, or is it a totally different system? If MMC produces proofs 
which are checkable by MM0, does that mean they are similar in style to MM 
proofs? It's very intriguing to me that MM0 can check both MM proofs and 
x86 proofs and yet remain small.

Relatedly, there has been a lot of work to build up the set.mm database of 
proofs; is there a similar process for building up MMC functions and their 
associated proofs? For example, given a verified addition function you 
could build a multiplication function, and then a function for matrix 
multiplication or something.
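
Something like this toy Rust sketch is what I have in mind, with mul 
stacked on top of a supposedly already-verified add; the names and the 
shape of it are mine, just to illustrate the stacking.

    // `add` stands in for an already-verified addition routine.
    fn add(a: u64, b: u64) -> u64 {
        a + b
    }

    // mul built only out of add. An inductive correctness argument would
    // go on b: mul(a, 0) = 0 and mul(a, b + 1) = add(mul(a, b), a).
    fn mul(a: u64, b: u64) -> u64 {
        let mut acc = 0;
        for _ in 0..b {
            acc = add(acc, a);
        }
        acc
    }

    fn main() {
        assert_eq!(mul(6, 7), 42);
        println!("mul(6, 7) = {}", mul(6, 7));
    }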

I think these formal methods are obviously the future of mathematics, so 
it's really cool to be able to watch them develop. I also think there may 
be interesting applications to the AI control problem, for example letting 
an AI expand its capabilities iff it can prove the expanded system is 
constrained in the same way it is constrained itself. There is a similar 
paradigm in what you are doing, with a system verifying itself (or its 
offspring). That sounds like it might make it feasible to control an 
arbitrarily sophisticated system, which is great, because without control 
there are a lot of worries. I asked Stuart Russell about it in a Reddit AMA 
and he said he has considered formal proofs as a method for AI control, 
which is interesting. I've watched enough Stargate SG-1 to know I don't 
want to have to deal with Replicators.
