The following thoughts started with speculation about the tools AIs might 
use. But AIs will change programming in ways that we cannot imagine. AI 
programmers will be alien beings. AIs will probably not use programming 
languages or IDEs.

AIs *might* be able to back-translate their work into Python, but I suspect 
that capability will be irrelevant. Let's look at the question in other 
ways:

AI programmers will likely never make *blunders* like misspellings. They 
will be "perfect" in much the same way that AlphaGo is. AIs will make *global 
inferences* about programs, like a super mypy. As a result, AI-written apps 
will be *flexible*. That is, assuming AIs will write apps at all :-)

*Emulating AIs*

How can we make our programs less error-prone and more flexible? A 
surprising answer emerged: we can *limit* the forms our programs can take. 
The more constrained our programs are, the better we can check them.
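Here is a minimal sketch of that idea in Python (not Leo's actual tooling, 
and the whitelist is purely hypothetical): if function bodies are restricted 
to a small set of statement forms, a tiny ast-based checker can verify the 
constraint mechanically.

```python
import ast

# Hypothetical constraint: function bodies may contain only these
# statement types. Anything else is flagged as a violation.
ALLOWED = (ast.Return, ast.Assign, ast.Expr, ast.If, ast.For)

def check_constraints(source: str) -> list:
    """Return (line, node-name) pairs for statements outside the whitelist."""
    violations = []
    tree = ast.parse(source)
    for func in ast.walk(tree):
        if isinstance(func, ast.FunctionDef):
            for node in ast.walk(func):
                # Nested defs are allowed; their bodies are checked too.
                if isinstance(node, ast.stmt) and not isinstance(
                    node, ALLOWED + (ast.FunctionDef,)
                ):
                    violations.append((node.lineno, type(node).__name__))
    return violations

good = "def f(x):\n    return x + 1\n"
bad = "def g(x):\n    global state\n    return x\n"
print(check_constraints(good))  # []
print(check_constraints(bad))   # [(2, 'Global')]
```

The narrower the whitelist, the more a checker (human, mypy, or AI) can 
conclude about the program without running it.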

*Back to code analysis*

Lately, I've turned my attention to analyzing Leo's sources. I've done this 
before. Looking at the problem with fresh eyes suggested ways of 
simplifying past approaches. But further work reminds me of all the 
challenges :-)

*Summary*

AI programmers will be alien beings. We can't know how society (and 
corporations) will interact with them.

Safety and flexibility will always be important goals. We *might* be able 
to further these goals by constraining our programs. But enforcing such 
constraints will be challenging, even for Leo. Still, this challenge is 
amusing for now.

Edward

-- 
You received this message because you are subscribed to the Google Groups 
"leo-editor" group.
To view this discussion on the web visit 
https://groups.google.com/d/msgid/leo-editor/041a9910-b8f3-44ae-b90e-e6a1ee4b0029n%40googlegroups.com.
