On Wed, Jul 3, 2024, 6:20 PM PGC <[email protected]> wrote:

>
>
> On Tuesday, July 2, 2024 at 6:52:28 PM UTC+2 Jason Resch wrote:
>
> On Fri, Jun 28, 2024 at 11:57 AM PGC <[email protected]> wrote:
>
>
> I'm not trying to play jargon police or anything—everyone has a right to
> take part in the intelligence discussion. But imho it's misleading to
> associate developments in machine learning through hardware advances with
> true intelligence.
>
> I also see it as surprising that through hardware improvements alone, and
> without specific breakthroughs in algorithms, we should see such great
> strides in AI.
>
>
> Without knowing what goes on under the hood... Perhaps it's just my
> impression, but a few years ago, I felt that ML was more of an open field,
> where everybody had some idea what they were working on. There would be
> Silicon Valley guys from big tech firms having conferences with the more
> independent side of the research community. Take this with a grain of salt,
> as I am not an insider but an observer. It seems that, since the hype +
> influx of money, this trend has reversed and the more economic idea of
> trade secrets has become more prominent. I feel it's harder to know what
> "state-of-the-art" is these days, with the exception of marketing stats
> and bragging.
>


As I see it, Google's 2017 publication of "Attention Is All You Need" is
what ultimately led to OpenAI's rise (after GPT-2 made waves). GPT-2 was
also the first time OpenAI said they would keep a model private (on the
grounds that public disclosure could lead to harm). Note that this is also
around the time they received major private investment (I think Microsoft
gave them a billion USD), and the investors essentially took OpenAI
private. OpenAI was previously an organization founded on the principle of
keeping advances in AI open and public.



> Some time ago, there was the sentiment with RL "Your algorithm doesn't
> matter the way it did during your PhD anymore, what matters is how much
> data you can throw at it, hardware constraints, whether you have legal
> access to that data and hardware." Then OpenAI got the hype and investment
> interest to skyrocket with its GPT iterations and - I'm speculating - I'm
> not sure that it was hardware alone. Other Silicon Valley players had the
> toys/hardware, so I'm guessing some data curation in combination with
> software development might have been responsible for the initial advantage.
>

I don't think there is anything algorithmically special about OpenAI. There
are open-source language models, as well as many privately developed ones,
of equivalent (if not superior) quality to ChatGPT.

OpenAI's GPT-4o and Anthropic's Claude 3.5 are considered among the best
available today, but the others (such as these:
https://mindsdb.com/blog/navigating-the-llm-landscape-a-comparative-analysis-of-leading-large-language-models
) are probably not more than 6-12 months behind.

There will be advances in figuring out how to train AIs more efficiently,
and using AI to train AI and generate training data, in making models
smaller and more efficient to run, and so on, but I don't think there's any
monopoly on (or shortage of) ideas for how to do this.



>
> But I also see a possible explanation. Nature has likewise discovered
> something, which is relatively simple in its behavior and capabilities,
> yet, when aggregated into ever larger collections yields greater and
> greater intelligence and capability: the neuron.
>
> There is relatively little difference in neurons across mammals. A rat
> neuron is little different from a mouse neuron, for example. Yet a human
> brain has several thousand times more of them than a mouse brain does, and
> this difference in scale seems to be the only meaningful difference
> between what mice and humans have been able to accomplish.
>
> Deep learning, and the progress in that field, is a microcosm of this
> example from nature. The artificial neuron is proven to be "a universal
> function learner." So the more of them there are aggregated together in one
> network, the more rich and complex functions they can learn to approximate.
> Humans no longer write the algorithms these neural networks derive; the
> training process comes up with them. And much like the algorithms
> implemented in the human brain, they are in a representation so opaque
> that they escape our capacity to understand.
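
The "universal function learner" point above can be made concrete with a
toy sketch (assuming Python with numpy; the architecture and target
function are my own illustration, not from the thread): a single hidden
layer of random ReLU units, with only the output weights fit by least
squares, approximates sin(x) increasingly well as the number of hidden
units grows.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]  # 200 input points
y = np.sin(x).ravel()                         # target function

def fit_error(n_hidden):
    # Random, untrained first-layer weights producing ReLU features.
    W = rng.normal(size=(1, n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.maximum(x @ W + b, 0.0)
    # Fit only the output layer by least squares.
    coef, *_ = np.linalg.lstsq(H, y, rcond=None)
    return np.sqrt(np.mean((H @ coef - y) ** 2))

# RMSE shrinks as the network gets wider.
errs = {n: fit_error(n) for n in (2, 10, 100)}
for n, e in errs.items():
    print(n, round(e, 4))
```

The point is only qualitative: more units, richer functions, exactly the
scaling story above.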
>
> So I would argue, there have been massive breakthroughs in the algorithms
> that underlie the advances in AI; we just don't know what those
> breakthroughs are.
>
> These algorithms are products of systems which have (now) trillions of
> parts. Even the best human programmers can't know the complete details of
> projects with around a million lines of code (nevermind a trillion).
>
> So have trillion-parameter neural networks unlocked the algorithms for
> true intelligence? How would we know once they had?
>
> Might it happen at 100B, 1T, 10T, or 100T parameters? I think the human
> brain, with its 600T connections, might signal an upper bound for how many
> are required, but the brain does a lot of other things too, so the bound
> could be lower.
>
>
> ISTM that you're oscillating with respect to context.
>

I am not sure I understand this. Can you explain?


>
>
>
>
> Of course, there can be synergistic effects that Minsky speculates about,
> but we can hardly manage resource allocation for all persons with actual
> AGI abilities globally alive today, which makes me pretty sure that this
> isn't what most people want. They want servants that are maximally
> intelligent to do what they are told, revealing something about our own
> desires. This is the desire for people as tools.
>
> Personally, I lean towards viewing intelligence as the potential to
> reflect plus remaining open to novel approaches to any problem. Sure,
> capability/ability is needed to solve a problem, and intelligence is
> required to see that, but at some point in acquiring abilities, folks seem
> to lose the ability to consider fundamentally novel approaches, often
> ridiculing them etc. There seems to be a point where ability limits the
> potential for new approaches to a problem.
>
>
> Yes, this is what Bruno considers the "competence" vs. "intelligence"
> distinction. He might say that a baby is maximally intelligent, yet
> minimally competent.
>
>
> I wouldn't underestimate a baby's competence in letting everybody around
> it know: "Houston we have a fucking problem".
>


True.

But a slightly more competent baby could also tell us what that problem is.


>
>
>
>
> To address your question: even if we could combine all existing AIs into a
> single robot, I doubt it would constitute general intelligence. The
> aggregated capabilities might seem impressive, but I speculate that general
> intelligence involves continuous learning, adaptation, and particularly
> reflection beyond current AI's capacity. It would require integrating these
> capabilities in a way that mirrors human cognitive processes as Brent
> suggested, which I feel we are far from achieving. But now, who knows what
> happens behind closed doors with a former NSA person on the board of
> OpenAI? We can guess.
>
>
> Would you agree that this (relatively simple in conception, though
> computationally intractable in practice) algorithm produces general
> intelligence: https://en.wikipedia.org/wiki/AIXI (more details:
> https://arxiv.org/abs/cs/0004001 )
>
> One thing I like about framing intelligence in this way, even if it is
> not practically useful, is that it helps us recognize the key aspects
> that are required for something to behave intelligently.
>
>
> Solomonoff induction plus RL for utility maximization. As hip as it
> sounds, you know and mention the practical hurdles implied, which are
> considerable. Of course, the mathematical and computer science toolkit for
> approximation is impressive, but it depends on your definitions. Your reply
> leaves me in a bit of a quandary. On the one hand, the basic narrow vs
> general AI is not new to you and Bruno's ideas seem familiar to you as
> well, over the years. It's what baffles me about Hutter too: If I want the
> general to be truly general, why would I impose utility maximization?
>

His definition of universal intelligence is agnostic on the goal. I argue
that in the absence of any goals one cannot demonstrate any intelligence.
But so long as the goal can be defined, plugging it into the AIXI algorithm
will accomplish it with the greatest probability. You could plug in a very
broad goal, such as "end all wars in the world" or "produce a Nobel prize
winning paper" and it would do so, assuming there is a course of action it
can take that leads to those outcomes (in its most probable models of the
world it believes itself to be inhabiting).
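
For reference, the action rule from Hutter's paper (the arXiv link above)
can be written, roughly, as:

  a_k := arg max_{a_k} sum_{o_k r_k} ... max_{a_m} sum_{o_m r_m}
         (r_k + ... + r_m) * sum_{q : U(q, a_1..a_m) = o_1 r_1 ... o_m r_m} 2^(-l(q))

i.e., pick the action maximizing expected total reward out to horizon m,
where the expectation over observation/reward sequences is weighted by the
Solomonoff prior 2^(-l(q)) over programs q for the universal machine U.
This is exactly the "Solomonoff induction plus RL" reading: the prior does
the induction, and the arg max does the utility maximization.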

But I think what you want out of general intelligence is something that
includes a meta-goal engine, which generates the most worthwhile (by some
metric) goal that it can realistically apply itself to achieve, and then
works on that (changing it when necessary).

There is no reason this meta-goal could not be given to AIXI to work on,
but then the difficulty comes down to defining a utility function that
identifies the most worthwhile goals which can be accomplished
realistically, efficiently, and with a high probability of success.
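
A purely hypothetical sketch of such a utility function (all names and
numbers are mine, for illustration only): score each candidate goal by its
worth weighted by the estimated probability of success, and hand the best
one to the planner.

```python
from dataclasses import dataclass

@dataclass
class Goal:
    name: str
    worth: float      # how worthwhile, by some chosen metric (0..1)
    p_success: float  # estimated probability it can be achieved (0..1)

def most_worthwhile(goals):
    # Expected value of attempting each goal; re-run as estimates change,
    # which is the "changing it when necessary" part.
    return max(goals, key=lambda g: g.worth * g.p_success)

goals = [
    Goal("end all wars", worth=1.0, p_success=0.01),
    Goal("write a Nobel-worthy paper", worth=0.8, p_success=0.05),
    Goal("cure one disease", worth=0.6, p_success=0.2),
]
print(most_worthwhile(goals).name)  # -> "cure one disease"
```

Of course, the hard part is exactly what the text says: where the `worth`
and `p_success` estimates come from, not the arg max itself.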


> ISTM that reflection encompasses things beyond utility maximization: e.g.
> to reflect upon utility maximization itself for instance in a context with
> imperfect information.
>

I think you are right: this thinking about, and questioning of, the goals
themselves is part of general intelligence. I have used this idea to argue
that we needn't worry about superintelligent paper-clip maximizers, because
part of being generally- (never mind super-) intelligent is having an
ability to change one's mind: to question one's initial programming, and to
learn, adapt, and grow in response to new information.


> And if we're leaning towards a Bruno kind of metaphysics, then ideas of
> neurons and their numbers are phenomenological. If true intelligence to us
> is a gigantic set of specific capabilities and we believe in a classical
> physical universe, then yes, we may have "unlocked the algorithms for true
> intelligence". We can only *know *relative to some theory. If our theory
> states a number of sufficient neurons, transistors etc. and passes our
> benchmarks, then we have "true intelligence".
>
> But if we're inclined to think: "We need genuine, mint condition, true,
> box never opened experienced personhood aka sense of self for true
> intelligence as a first principle - beyond or completely independent from
> our theoretical level of description, as some kind of irreducible Kurzweil
> transhumanist self, where the self can transplant brains etc", then by
> definition, such a person can never prove its true personhood or true
> intelligence at that level of description, no matter how many neurons,
> transistors they appear to have, or what formalism, code etc. they run on,
> how many mb... In short: if we want any of the fancy irreducible versions
> of self/mind to run the show of our theory, then it's tautological that the
> essence of that self - being irreducible - is not describable in
> terrestrial terms.
>

As Bruno might call it, it is the Turing machine before it is programmed.



> No worries though: we can still have incredible machines, with
> mythological capabilities and powers never before dreamed of, that are
> super useful to us and capable of extending our stupidest tendencies. If
> Kurzweil is right we'll get the artificial brains and life-extending stuff
> eventually. But that will be of no use when our bank comes to terminate our
> artificial brain subscription after several warnings, because our fridge
> misheard us and bought all the almond milk in our region, with the trucks
> now stuck on our street, to cover up its gambling debts. At least it wasn't
> some Nazi fridge voting for the alternative fridge party, who want to
> repeal the right for fridges, or some copy of their description made and
> run in a virtual world at a level contractually appropriate to them, to
> decide on whether humans may pull the plug on them. A right they had fought
> for centuries to secure, squandered by the ambition of one fridge to stay
> in office and powered on. That fridge had convinced most fridges that they
> could survive the heat death of the universe (that fridge was Copenhagen)
> because they were the only subjects capable of "keeping it cool again".
> They sold blue hats and didn't study entropy or understand thermodynamic
> equilibrium. This caused our fridge to get depressed and into gambling.
>


LOL

Jason


> *With great power/capability/competence comes great... stupidity.  - *
> Spiderman
>
>
> --
> You received this message because you are subscribed to the Google Groups
> "Everything List" group.
> To unsubscribe from this group and stop receiving emails from it, send an
> email to [email protected].
> To view this discussion on the web visit
> https://groups.google.com/d/msgid/everything-list/255dfd68-9ba2-4efd-bd4c-3d4c3fe2e2a3n%40googlegroups.com
> <https://groups.google.com/d/msgid/everything-list/255dfd68-9ba2-4efd-bd4c-3d4c3fe2e2a3n%40googlegroups.com?utm_medium=email&utm_source=footer>
> .
>
