Hi,

Michel Verdier wrote:
> I think they need better intelligence to better use their data.

Actually i think that current AI lacks disciplined and well-meaning
reasoning rather than IQ.

It is astounding how far ye olde Perceptron has come in the meantime,
augmented with oddities like non-linear voodoo layers or 8-bit
floating-point arithmetic. Remembering the intelligence tests of my
younger years, i'd expect to be beaten by AI in many of their categories.

But small as the space of combinations of xorrisofs arguments is
compared to the size of an AI parameter space, the AIs are still not
able to correctly answer a question like "How to modify a bootable
ISO of distro XYZ?". (I pick up the debris of their flawed answers
on the internet.)
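
To illustrate how small that problem space is: a minimal sketch of the
kind of answer i would expect, assuming the goal is to add or replace a
file in the ISO while keeping its boot equipment, and with all file
paths being mere placeholders:

  xorriso -indev original.iso \
          -outdev modified.iso \
          -boot_image any replay \
          -map /local/path/grub.cfg /boot/grub/grub.cfg

The option -boot_image any replay tries to carry the El Torito and MBR
boot equipment of the loaded ISO over into the newly written one, which
is the part that is easiest to get wrong.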


Joe wrote:
> The problem is that 'AI' is a hoax, there is no 'I'. What we have now is
> ELIZA with about a trillion times as many computer resources, but not a
> single bit more actual intelligence, since we don't know how to make
> that.

We only know one way to make human intelligence, and it is about as
obscure as AI training if we consider the ~ 3.5 billion years of
evolution which enabled our mass production of humans. And many of
those humans will never qualify for what we as computer-oriented people
are undisputedly willing to call "intelligence".


> It's a large-scale expert system that hasn't been trained by experts.

Again, much as with humans:
Those who can, do.
Those who cannot, teach.
Those who cannot teach, teach teachers.

I recognize in AI lots of deficiencies which i first saw in journalism
and in academic communities. Form beats meaning. Word beats structure.
So what is missing, in my view, is a commitment to "Nullius in Verba":
an AI which does not believe everything that its masters give it to
read.
Of course we have to be aware of D.F. Jones' novel "Colossus", which
stems from the same era as the Perceptron.


Andy Smith wrote:
> If an LLM were asking for help, and I knew it was an LLM, then I would
> also know that any time I spent on responding would be me donating my
> free labour to a huge corporation for it to make money off of

It is my considered decision to allow exploitation of my personal
sport. I'm not really in the situation of https://xkcd.com/2347, but
i've seen my hobby horse in use by organizations which surely would not
have hired me 20 years ago. I don't begrudge them their profit. Making
money is hard work in itself, and it leaves its own scars on the soul.


> in what is likely to be a highly damaging bubble. [..]
> It is perhaps a little illogical since it doesn't seem to bother me what
> the human would use the knowledge for, while it does bother me what the
> LLM uses it for.

This is an interesting aspect of free software work in general.
What if somebody does something really evil with the help of my stuff?
Am i responsible?
Should i scrutinize people for possible bad intentions before giving
them advice?
(The GPL of inherited code forbids me to impose a demand to not be
evil, even if i could give a convincing definition of what i mean by
that. But that is a rather lame excuse, ethically. I could get rid of
that code and then start a license crusade.)


> Soon we'll have our own agents offer
> to take away the tedium of doing that by doing it for us.

I'm on the side of the non-evil ones. :))


[email protected] wrote:
> I think the point is that the currently dominant "AI" shops
> aren't about facts. There's not much money in that.

But there will be when the old experts retire and the rich people need
a doctor who knows the job.


> There is "money" (actually potential, speculative money)

A very important point. For now AI is predominantly expensive and
gluttonous.


I wrote:
> > [...] so we get smoothly into the
> > pampered and isolated state of the Spacers in Asimov's novel
> > "The Naked Sun".

[email protected] wrote:
> No, no. As much as I admire Asimov, it's more Harry G. Frankfurt [1]
> here. Sounding truthful is the aim for them, truth is just
> uninteresting.

But it would be tremendously useful and monetarily valuable to have a
simulation of a well-meaning rational expert. For now AI simulates
highly educated impostors.
I deem it surprising that the art of imposture was so easy to acquire.
Now, if the swindler were to discover its love for honest science ...


Have a nice day :)

Thomas
