LLMs do not have intrinsic short-term memory or modifiable long-term memory. Both require supplemental systems: reprompting with recent history, or expensive offline fine-tuning, or even more expensive retraining.
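Here is a minimal sketch (in Python, with a placeholder call_llm() standing in for any real model API) of what that "reprompting of recent history" amounts to: the model itself is stateless, and the only short-term memory is whatever the application resends with each call.

# Sketch: simulated short-term memory via reprompting. call_llm() is a
# placeholder, not any particular vendor's API.

def call_llm(prompt: str) -> str:
    # Placeholder for a real LLM call; just echoes the last line of the prompt.
    return "echo: " + prompt.splitlines()[-1]

history = []   # (role, text) pairs kept by the application, not by the model
WINDOW = 20    # number of recent turns resent on each call (arbitrary choice)

def ask(user_message: str) -> str:
    history.append(("user", user_message))
    # Rebuild the prompt from the most recent turns; anything older is forgotten.
    prompt = "\n".join(f"{role}: {text}" for role, text in history[-WINDOW:])
    reply = call_llm(prompt)   # stateless call; the model sees only what we resend
    history.append(("assistant", reply))
    return reply

print(ask("Remember that my name is Jed."))
print(ask("What is my name?"))   # "remembered" only because the first turn was resent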

I think it’s fair to say no AGI until those are designed in, particularly the ability to actually learn from experience.


On Apr 10, 2023, at 9:34 AM, Jed Rothwell <jedrothw...@gmail.com> wrote:



As I said earlier, it may not make any difference whether an AI feels/thinks as we do, or just mimics the process.

That is certainly true.


As you pointed out, the AI has no concept of the real world, so it's not going to care whether it's shooting people up in a video game, or using a robot with a real machine gun in the real world.

I hope that an advanced AGI will have a concept of the real world, and that it will know the difference. I do not think the word "care" applies here, but if we tell it not to use a machine gun in the real world, I expect it will follow orders, because that's what computers do. Of course, if someone programs it to use a machine gun in the real world, it would do that too!

I hope we can devise something like Asimov's laws at the core of the operating system to prevent people from programming things like that. I do not know if that is possible.


It may be "just a tool", but the more capable we make it the greater the chances that something unforeseen will go
wrong, especially if it has the ability to connect with other AIs over the Internet, because this adds exponentially to
the complexity, and hence our ability to predict what will happen decreases proportionately.

I am not sure I agree. There are many analog processes that we do not fully understand, and they sometimes go catastrophically wrong. For example, water gets into coal and causes explosions in coal-fired generators. Food is contaminated despite our best efforts to prevent that. Contamination is a complex process that we do not fully understand or control, although of course we know a lot about it.

It seems to me that as AI becomes more capable it may become easier to understand, and more transparent. If it is engineered right, the AI will be able to explain its actions to us in ways that cut through the complexity and give us the gist of the situation.

For example, I use the Delphi 10.4 compiler for Pascal and C++. It has some AI built into it, for the Refactoring and some other features. It is enormously complex compared to compilers from decades ago. It has hundreds of canned procedures and functions. Despite this complexity, it is easier for me to see what it is doing than it was in the past, because it has extensive debugging facilities. You can stop execution and look at variables and internal states in ways that would have been impossible in the past. You can install add-ons that monitor for things like memory leaks. With refactoring and other features, you can ask it to look for code that may cause problems. I don't mean code that does not compile, or warning signs such as variables that are never used; it has been able to do that for a long time. I mean more subtle errors.

I think it also gives helpful hints for upgrading legacy code to modern standards, but I have not explored that feature. The point is, increased complexity gives me more control and more understanding of what it is doing, not less.
