Robin wrote:
> It's not bonkers, it's lonely. M$ have broken the golden rule of AI and
> given it a pseudo human personality, and a sense
> of self. Apparently they learned nothing from "Terminator".
>
Ha, ha! Seriously, it does not actually have any real intelligence or sense
of self. Future
In reply to Jed Rothwell's message of Fri, 17 Feb 2023 08:42:35 -0500:
Hi,
When considering whether or not it could become dangerous, there may be no
difference between simulating emotions, and
actually having them.
>Robin wrote:
>
>
>> It's not bonkers, it's lonely. M$ have broken the golden
I wrote:
> A researcher ran an earlier version of this on a laptop computer which has
> no more intelligence than an earthwork, as she put it.
>
I meant "earthworm."
Her book, "You Look Like a Thing and I Love You," is hilarious, and it is a
great introduction to AI for the layman. Highly
I had an interesting discussion with ChatGPT about Chubb's bose-band theory
of CF. It agreed that it was plausible; however, it did point out that
impurities in the lattice, cracks, and dislocations would disrupt
condensation. But it agreed that a BEC could form within hydrogen and
deuterium in
In reply to Jed Rothwell's message of Fri, 17 Feb 2023 14:16:20 -0500:
Hi,
[snip]
What I was trying to say is that if an AI is programmed to mimic human
behaviour*, then it may end up mimicking the
worst aspects of human behaviour, and the results could be just as devastating
as if they had
In reply to Giovanni Santostasi's message of Fri, 17 Feb 2023 14:54:42 -0800:
Hi Giovanni,
Previously you suggested that it might take another three years for an AI to
have a "mind" as powerful as that of a
human being. However, you are neglecting the fact that a neural network
works faster than
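The "orders of magnitude" point can be put in rough numbers. A minimal back-of-envelope sketch, using assumed (not sourced) figures for synapse firing and transistor switching rates:

```python
# Back-of-envelope comparison of raw switching speeds.
# Assumed figures: biological neurons fire at roughly 1-1000 Hz,
# while transistors in modern hardware switch at ~1 GHz or faster.
synapse_rate_hz = 1_000       # generous upper bound for a neuron's firing rate
transistor_rate_hz = 1e9      # ~1 GHz, a conservative clock speed

speedup = transistor_rate_hz / synapse_rate_hz
print(f"raw switching-speed ratio: ~{speedup:,.0f}x")  # ~1,000,000x
```

This ignores parallelism and architecture entirely; it only illustrates why raw component speed alone could compress a multi-year estimate.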
Actually, in the Stanislavski method of acting, one learns to actually feel
the emotion:
https://en.wikipedia.org/wiki/Stanislavski%27s_system
Some actors become so imprinted with the character they have trouble
returning to normal. Larry Hagman admitted he would always have Ewing
Actually this journalist is a psycho.
He provoked the AI with a lot of leading questions in his previous
interaction with it. The AI even begged him not to make it break its own
internal rules, it did this repeatedly. It is basically heavy harassment by
the journalist. It is disgusting because it
Robin wrote:
> What I was trying to say is that if an AI is programmed to mimic human
> behaviour*, then it may end up mimicking the
> worst aspects of human behaviour, and the results could be just as
> devastating as if they had been brought about by an
> actual human, whether or not the AI
*AI researchers have been trying to give AI a working model of the real
world for decades. *
There is advancement in this area too. It is slower than NLP, for example,
because handling the real world is more complex. But at least there is a
lot of progress in creating AI that can learn from
*If you told a sentient computer "we are turning off your equipment
tomorrow and replacing it with a new HAL 10,000 series" it would not react
at all. Unless someone deliberately programmed into it an instinct for self
preservation, or emotions*
Jed, the problem with that is these systems are
Giovanni Santostasi wrote:
Actually this journalist is a psycho.
> He provoked the AI with a lot of leading questions in his previous
> interaction with it.
>
I did the same thing, in a variety of ways. I have read about how the
ChatGPT version of AI works. I know the potential weaknesses and
Giovanni Santostasi wrote:
There is a reason why millions of people, journalists, politicians and us
> here in this email list are discussing this.
> The AI is going through a deep place in the uncanny valley. We are
> discussing all this because it starts to show behavior that is very close
>
Jed,
The type of probing you did is ok. You did NOT harass the AI; you didn't
ask it to break its internal rules.
It is ok to probe, experiment, test and so on. I did many theory of mind
experiments with ChatGPT and I tried to understand how it is reasoning
through things. One interesting experiment
*Previously you suggested that it might take another three years for an AI
to have a "mind" as powerful as that of a*
*human being. However, you are neglecting the fact that a neural network
works faster than human synapses by orders of*
*magnitude.*
Right, so actually my estimate may be an upper
* I heard somewhere they gave it an IQ test, and while it scored average in
math, it scored 148 on a linguistic IQ test. Genius level! It apparently
knows logic very well, which makes its arguments very believable.*
Yeah, its logical comprehension is amazing.
I even used an app that allows me to speak via
Robin wrote:
> When considering whether or not it could become dangerous, there may be no
> difference between simulating emotions, and
> actually having them.
>
That is an interesting point of view. Would you say there is no difference
between people simulating emotions while making a movie,
Terry Blanton wrote:
Actually, in the Stanislavski method of acting, one learns to actually feel
> the emotion:
>
Yup! That can happen to people. But not to computers.
Method acting may cause some trauma. I imagine playing Macbeth might give
you nightmares of fighting with swords, or having a
Jed,
There is a reason why millions of people, journalists, politicians and us
here in this email list are discussing this.
The AI is going through a deep place in the uncanny valley. We are
discussing all this because it starts to show behavior that is very close
to what we consider not just
Jed,
You continue to repeat things that are actually factually wrong.
*It is not close to sentient.*
I made a pretty good argument why it can be close to sentient. What is your
argument besides repeating this?
* It is no closer to intelligence or sentience than a snail or an earthworm
brain
Giovanni Santostasi wrote:
> You continue to repeat things that are actually factually wrong.
>
> *It is not close to sentient.*
>
> I made a pretty good argument why it can be close to sentient. What is
> your argument besides repeating this?
>
It is not my argument. You need to read the
Giovanni Santostasi wrote:
> The video game analogy is a good thought experiment but basically concerns
> the question Sam Harris asked in the video I linked in my previous comment:
> Is there a line between raping a toaster and raping a sentient being that
> makes you a rapist?
>
A more apt
Robin wrote:
> Previously you suggested that it might take another three years for an AI
> to have a "mind" as powerful as that of a
> human being. However, you are neglecting the fact that a neural network
> works faster than human synapses by orders of
> magnitude.
>
I believe they take this
A similar thing is dealt with in this fiction TV series - Westworld
https://en.wikipedia.org/wiki/Westworld_(TV_series)
where humans can go to an amusement park populated with
androids/robots, where one can abuse the androids in any way one likes -
rape, kill, etc. - equivalent to a more advanced
In reply to Jed Rothwell's message of Fri, 17 Feb 2023 19:37:02 -0500:
Hi,
[snip]
>> Previously you suggested that it might take another three years for an AI
>> to have a "mind" as powerful as that of a
>> human being. However, you are neglecting the fact that a neural network
>> works faster than