On Tue, Apr 23, 2024 at 10:10 PM Bruce Kellett <[email protected]> wrote:
>> Two things determine what LLAMA3 or any other AI will do:
>> 1) The machine's environment, which in this case is the prompt, which can be written text, audio, a picture, or a video.
>> 2) The way the neural network of the machine is wired up, which is determined by a huge matrix of numbers that nobody understands.

> Just because no one understands the way this is wired up does not mean that it is the same as a human brain.

I certainly don't believe there is one and only one way a human brain can be wired up; if there were, we'd all be the same, and we're not: some humans are geniuses and some are imbeciles. And nobody has anything more than a hazy, coarse-grained understanding of how modern Large Language Models are wired up, but we do know a few things about them:

1) However modern neural networks are wired up, they end up working at least as well as the average human's biological brain.

2) The way LLMs are wired up is changing and improving at an exponential rate. The closed-source LLM GPT-3.5, which astonished everybody when it was introduced about a year ago, has 175 billion parameters. The open-source LLAMA-3, which was introduced only a few days ago, has only 70 billion parameters, but its answers are better than GPT-3.5's and almost as good as those of GPT-4, with its 1.8 trillion parameters. And because it's so much smaller, you need less hardware and energy to run LLAMA-3 than GPT-3.5, and vastly less than GPT-4.

>>> That it has lots of parameters that are numbers is not the same as having lots of values.

>> Why not? How would the machine behave differently if having lots of parameters WERE the same as having lots of values?

> That is not the question.

I don't know what "the question" is, but I know what MY question was, and I think it was crystal clear; yet I still have not received an answer to it.
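To make the hardware claim in point 2 concrete, here is a back-of-envelope sketch of the memory needed just to hold each model's weights, using the parameter counts cited above. The 2-bytes-per-parameter figure is an assumption (16-bit weights, no quantization), so these are rough estimates, not published specifications:

```python
# Rough memory footprint of model weights, assuming fp16 (2 bytes/parameter).
# Parameter counts are the ones cited in this thread; GPT-3.5's and GPT-4's
# exact sizes are not officially confirmed by OpenAI.
BYTES_PER_PARAM = 2  # assumption: 16-bit weights

models = {
    "GPT-3.5": 175e9,
    "LLAMA-3-70B": 70e9,
    "GPT-4": 1.8e12,
}

for name, params in models.items():
    gib = params * BYTES_PER_PARAM / 2**30  # bytes -> GiB
    print(f"{name}: ~{gib:,.0f} GiB of weights")
```

On these assumptions LLAMA-3-70B fits in a few high-end GPUs' memory, while a 1.8-trillion-parameter model needs thousands of GiB for its weights alone, which is the sense in which the smaller model needs "vastly less" hardware.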
> If the machine behaves exactly as a human in terms of following a value set, then you will, by definition, see no difference. But in saying this you are assuming that the AI can in fact behave in this way, and that is just to assume the answer to the original question. Which was: Can the AI act according to human-type values.

I don't need to assume anything; I know it is a fact, because way back in the very distant past, a full year ago, a computer was able to pass the Turing Test. These days, if a modern LLM wanted to deceive a human into thinking he was talking to another biological person, it would have to pretend to be more stupid and ignorant than it really is, and to be thinking more slowly than it really can. Yes, LLMs can still occasionally say stupid things, but 95% of human college graduates cannot correctly explain what causes the seasons; most say it's because the Earth is closer to the sun in the summer than in the winter, but in the northern hemisphere exactly the opposite is true. And Harvard graduates are not immune to this misconception.

Harvard Graduates Explain Seasons <https://www.youtube.com/watch?v=JXb7Oq13pjQ>

John K Clark    See what's on my new list at Extropolis <https://groups.google.com/g/extropolis>

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To view this discussion on the web visit https://groups.google.com/d/msgid/everything-list/CAJPayv2mShFu30ZGa9%2BudKjqytPYQuL1JnEhSieEsSLOnk_B4w%40mail.gmail.com.

