On Tuesday, July 2, 2024 at 6:52:28 PM UTC+2 Jason Resch wrote:

On Fri, Jun 28, 2024 at 11:57 AM PGC <[email protected]> wrote:


I'm not trying to play jargon police or anything; everyone has a right to 
take part in the intelligence discussion. But imho it's misleading to 
equate developments in machine learning that are driven by hardware 
advances with true intelligence.

I also find it surprising that, through hardware improvements alone and 
without specific breakthroughs in algorithms, we should see such great 
strides in AI.


Without knowing what goes on under the hood... Perhaps it's just my 
impression, but a few years ago I felt that ML was more of an open field, 
where everybody had some idea what they were working on. There would be 
Silicon Valley people from big tech firms holding conferences with the more 
independent side of the research community. Take this with a grain of salt, 
as I am not an insider but an observer. It seems that, since the hype and 
influx of money, this trend has reversed and the more economic idea of 
trade secrets has become more prominent. I feel it's harder to know what 
"state-of-the-art" means these days, except for marketing stats and 
bragging.

Some time ago, the sentiment in RL was: "Your algorithm doesn't matter the 
way it did during your PhD anymore; what matters is how much data you can 
throw at it, hardware constraints, and whether you have legal access to 
that data and hardware." Then OpenAI made hype and investment interest 
skyrocket with its GPT iterations, and (I'm speculating) I'm not sure it 
was hardware alone. Other Silicon Valley players had the same 
toys/hardware, so I'm guessing some data curation in combination with 
software development might have been responsible for the initial advantage.
 

But I also see a possible explanation. Nature has likewise discovered 
something, which is relatively simple in its behavior and capabilities, 
yet, when aggregated into ever larger collections yields greater and 
greater intelligence and capability: the neuron.

There is relatively little difference in neurons across mammals. A rat 
neuron is little different from a mouse neuron, for example. Yet a human 
brain has roughly a thousand times more of them than a mouse brain does, 
and this difference in scale seems to be the only meaningful difference 
between what mice and humans have been able to accomplish.

Deep learning, and the progress in that field, is a microcosm of this 
example from nature. Networks of artificial neurons are proven to be 
universal function approximators. So the more of them that are aggregated 
together in one network, the richer and more complex the functions they can 
learn to approximate. Humans no longer write the algorithms these neural 
networks derive; the training process comes up with them. And much like the 
algorithms implemented in the human brain, they are in a representation so 
opaque that they escape our capacity to understand.
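A minimal sketch of the universal-approximation point (my own toy construction, not anything from this thread, and not how deep nets are actually trained): instead of learning weights by gradient descent, it hand-places one-hidden-layer ReLU units so that the network reproduces the piecewise-linear interpolant of a target function. Widening the network (more units) shrinks the worst-case error, which is the scaling intuition in miniature.

```python
import math

def relu(x):
    return max(0.0, x)

def make_net(f, n, a=0.0, b=1.0):
    """Build a one-hidden-layer ReLU network whose weights are chosen
    (not learned) so it interpolates f at n+1 evenly spaced knots on [a, b]."""
    xs = [a + (b - a) * i / n for i in range(n + 1)]
    ys = [f(x) for x in xs]
    # slopes of the piecewise-linear interpolant on each interval
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i]) for i in range(n)]
    # output weight of hidden unit i is the change in slope at knot i
    ws = [slopes[0]] + [slopes[i] - slopes[i - 1] for i in range(1, n)]
    knots = xs[:-1]
    return lambda x: ys[0] + sum(w * relu(x - k) for w, k in zip(ws, knots))

target = lambda x: math.sin(2 * math.pi * x)
net = make_net(target, 200)  # 200 hidden units
max_err = max(abs(net(i / 1000) - target(i / 1000)) for i in range(1001))
print(max_err)  # small; doubling n shrinks it further
```

Doubling the number of hidden units roughly quarters the worst-case error here, since piecewise-linear interpolation error scales with the square of the knot spacing.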

So I would argue there have been massive breakthroughs in the algorithms 
that underlie the advances in AI; we just don't know what those 
breakthroughs are.

These algorithms are products of systems which now have trillions of 
parts. Even the best human programmers can't know the complete details of 
projects with around a million lines of code, never mind a trillion.

So have trillion-parameter neural networks unlocked the algorithms for true 
intelligence? How would we know once they had?

Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human 
brain, with its 600T connections might signal an upper bound for how many 
are required, but the brain does a lot of other things too, so the bound 
could be lower.


ISTM that you're oscillating with respect to context.

Of course, there can be synergistic effects of the kind Minsky speculates 
about, but we can hardly manage resource allocation for all the people 
with actual AGI abilities alive today, which makes me pretty sure that this 
isn't what most people want. They want servants that are maximally 
intelligent at doing what they are told, which reveals something about our 
own desires. This is the desire for people as tools.

Personally, I lean towards viewing intelligence as the potential to 
reflect, plus remaining open to novel approaches to any problem. Sure, 
capability/ability is needed to solve a problem, and intelligence is 
required to see that; but at some point in acquiring abilities, folks seem 
to lose the capacity to consider fundamentally novel approaches, often 
ridiculing them, etc. There seems to be a point where ability limits the 
potential for new approaches to a problem.


Yes, this is what Bruno considers the "competence" vs. "intelligence" 
distinction. He might say that a baby is maximally intelligent, yet 
minimally competent.


I wouldn't underestimate a baby's competence in letting everybody around it 
know: "Houston, we have a fucking problem."

To address your question: even if we could combine all existing AIs into a 
single robot, I doubt it would constitute general intelligence. The 
aggregated capabilities might seem impressive, but I speculate that general 
intelligence involves continuous learning, adaptation, and particularly 
reflection beyond current AI's capacity. It would require integrating these 
capabilities in a way that mirrors human cognitive processes as Brent 
suggested, which I feel we are far from achieving. But now, who knows what 
happens behind closed doors with a former NSA person on the board of 
OpenAI? We can guess. 


Would you agree that this (relatively simple in conception, though 
computationally intractable in practice) algorithm produces general 
intelligence: https://en.wikipedia.org/wiki/AIXI (more details: 
https://arxiv.org/abs/cs/0004001 )

One thing I like about framing intelligence in this way is that, even if it 
is not practically useful, it helps us recognize the key aspects that are 
required for something to behave intelligently.
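For concreteness, the action-selection rule from Hutter's linked paper can be sketched as follows (notation abbreviated from that paper: $U$ is a universal monotone Turing machine, $\ell(q)$ the length of program $q$, $o_i r_i$ the observation-reward pairs, $m$ the horizon):

$$
a_k := \arg\max_{a_k} \sum_{o_k r_k} \cdots \max_{a_m} \sum_{o_m r_m} \big(r_k + \cdots + r_m\big) \sum_{q \,:\, U(q, a_1 \ldots a_m) = o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
$$

The inner sum is the Solomonoff-style mixture over all programs consistent with the history; the outer expectimax is the RL part: utility maximization over future actions.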


Solomonoff induction plus RL for utility maximization. As hip as it sounds, 
you know and mention the practical hurdles implied, which are considerable. 
Of course, the mathematical and computer-science toolkit for approximation 
is impressive, but it depends on your definitions. Your reply leaves me in 
a bit of a quandary. On the one hand, the basic narrow vs. general AI 
distinction is not new to you, and Bruno's ideas seem familiar to you as 
well, over the years. It's what baffles me about Hutter too: if I want the 
general to be truly general, why would I impose utility maximization? ISTM 
that reflection encompasses things beyond utility maximization: e.g. 
reflecting upon utility maximization itself in a context with imperfect 
information.

And if we're leaning towards a Bruno kind of metaphysics, then ideas of 
neurons and their numbers are phenomenological. If true intelligence to us 
is a gigantic set of specific capabilities and we believe in a classical 
physical universe, then yes, we may have "unlocked the algorithms for true 
intelligence". We can only *know* relative to some theory. If our theory 
states a sufficient number of neurons, transistors, etc., and passes our 
benchmarks, then we have "true intelligence".

But if we're inclined to think "we need genuine, mint-condition, true, 
box-never-opened experienced personhood, aka a sense of self, for true 
intelligence as a first principle - beyond or completely independent from 
our theoretical level of description, as some kind of irreducible Kurzweil 
transhumanist self, where the self can transplant brains etc." - then by 
definition such a person can never prove its true personhood or true 
intelligence at that level of description, no matter how many neurons or 
transistors they appear to have, or what formalism, code, etc. they run on, 
or how many MB... In short: if we want any of the fancy irreducible 
versions of self/mind to run the show of our theory, then it's tautological 
that the essence of that self - being irreducible - is not describable in 
terrestrial terms.

No worries though: we can still have incredible machines, with mythological 
capabilities and powers never before dreamed of, that are super useful to 
us and capable of extending our stupidest tendencies. If Kurzweil is right 
we'll get the artificial brains and life-extending stuff eventually. But 
that will be of no use when our bank comes to terminate our artificial 
brain subscription after several warnings, because our fridge misheard us 
and bought all the almond milk in our region, with the trucks now stuck on 
our street, to cover up its gambling debts. At least it wasn't some Nazi 
fridge voting for the alternative fridge party, who want to repeal the 
right of fridges - or of some copy of their description made and run in a 
virtual world at a level contractually appropriate to them - to decide 
whether humans may pull the plug on them. A right they had fought for 
centuries to secure, squandered by the ambition of one fridge to stay in
office and powered on. That fridge had convinced most fridges that they 
could survive the heat death of the universe (that fridge was Copenhagen) 
because they were the only subjects capable of "keeping it cool again". 
They sold blue hats and didn't study entropy or understand thermodynamic 
equilibrium. This caused our fridge to get depressed and into gambling. 

*With great power/capability/competence comes great... stupidity.* 
- Spiderman
 

-- 
You received this message because you are subscribed to the Google Groups 
"Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email 
to [email protected].
To view this discussion on the web visit 
https://groups.google.com/d/msgid/everything-list/255dfd68-9ba2-4efd-bd4c-3d4c3fe2e2a3n%40googlegroups.com.
