Indeed, and as I pointed out, it's not all that difficult to debunk the idea that it understands anything at all: just ask simple questions that are not included in its database. You can test chatGPT just as you can test students whom you suspect of having cheated on an exam. You invite them to your office for clarification and have them work some problems in front of you on the blackboard. If those questions are simpler than the exam problems and the student cannot do them, that's a red flag.

Similarly, as discussed here, chatGPT was able to give the derivation of the moment of inertia of a sphere, but was unable to derive this in a much simpler way by invoking spherical symmetry, even when given lots of hints. All it could do was write down the original derivation again and then argue that the moment of inertia is the same for all axes and that the result is spherically symmetric. But it couldn't derive the expression for the moment of inertia by making use of that fact (adding up the moments of inertia about 3 orthogonal axes yields a spherically symmetric integral that's much easier to compute). The reason it can't do this is that it's not in its database.
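To make the shortcut concrete: by spherical symmetry I_x = I_y = I_z = I, and since I_x + I_y + I_z = 2 * Integral of (x^2 + y^2 + z^2) dm = 2 * Integral of r^2 dm, we get I = (2/3) Integral of r^2 dm, a purely radial integral. A quick numerical sanity check (Python, with M = R = 1; the numbers are mine, not from the original post):

```python
from math import pi

# Symmetry argument for a uniform solid sphere of mass M, radius R:
# I = (2/3) * Integral of r^2 dm, with dm = rho * 4 pi r^2 dr.
M, R = 1.0, 1.0
rho = 3 * M / (4 * pi * R**3)          # uniform density

# midpoint-rule evaluation of Integral of r^2 dm = Integral rho 4 pi r^4 dr
n = 100_000
dr = R / n
integral_r2_dm = sum(rho * 4 * pi * ((i + 0.5) * dr)**4 * dr for i in range(n))

I = (2 / 3) * integral_r2_dm
print(I, 2 / 5 * M * R**2)             # both ~0.4, the familiar (2/5) M R^2
```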

And there are quite a few such cases, where the widely published solution is significantly more complex than another solution that isn't widely published and may not be in chatGPT's database. For example:

Derive that the flux of isotropic radiation incident on an area is 1/4 u c, where u is the energy density and c the speed of light.

Standard solution: The part of the flux coming from a solid angle range dOmega is u c cos(theta) dOmega/(4 pi), where theta is the angle with the normal of the surface. Writing dOmega as sin(theta) dtheta dphi and integrating over the half-sphere from which the radiation can reach the area yields:

Flux = u c/(4 pi) * Integral over phi from 0 to 2 pi of dphi * Integral over theta from 0 to pi/2 of sin(theta) cos(theta) dtheta = 1/4 u c
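As a sanity check on that integral (setting u = c = 1, so the answer should come out to 1/4), here's a quick numerical evaluation; the step count is arbitrary:

```python
from math import sin, cos, pi

# Flux = 1/(4 pi) * Int_0^{2 pi} dphi * Int_0^{pi/2} sin(t) cos(t) dt,
# with u = c = 1. The phi integral is trivially 2 pi; do theta numerically.
n = 100_000
dtheta = (pi / 2) / n
theta_integral = sum(sin((i + 0.5) * dtheta) * cos((i + 0.5) * dtheta) * dtheta
                     for i in range(n))
flux = (1 / (4 * pi)) * 2 * pi * theta_integral
print(flux)   # ~0.25, i.e. (1/4) u c
```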

chatGPT will probably have no problems blurting this out, because this can be found in almost all sources.

But the fact that the radiation is isotropic should be something we can exploit to simplify this derivation. That's indeed possible. The reason we couldn't in the above derivation is that we took the area to be a small flat patch, which broke the spherical symmetry. So let's fix that:

Much simpler derivation: Consider a small sphere of radius r inside a cavity filled with isotropic radiation. The amount of radiation intercepted from a solid angle range dOmega around any direction is then u c pi r^2 dOmega/(4 pi), because the radiation is intercepted by the cross section of the sphere orthogonal to the direction the radiation comes from, and that's always pi r^2. Because this doesn't depend on the direction the radiation comes from, integrating over the solid angle is now trivial and yields u c pi r^2. The flux intercepted per unit area of the sphere is then obtained by dividing this by the sphere's surface area 4 pi r^2, which gives 1/4 u c. And if that's the flux incident on an area element of the sphere, it is also the flux through it if the rest of the sphere weren't there.
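The whole argument then reduces to three lines of arithmetic (again with u = c = 1, and an arbitrary small test radius r of my choosing):

```python
from math import pi

# Small-sphere argument with u = c = 1:
r = 0.01
per_solid_angle = pi * r**2 / (4 * pi)    # power intercepted per unit solid angle
total = per_solid_angle * 4 * pi          # trivial integral over all 4 pi steradians
flux = total / (4 * pi * r**2)            # divide by the sphere's surface area
print(flux)   # ~0.25, i.e. (1/4) u c
```

Note that the r dependence cancels, as it must: the result cannot depend on the size of the test sphere.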

chatGPT probably won't be able to present this much simpler derivation regardless of how many hints you give it.


Saibal







On 22-05-2023 23:56, Terren Suydam wrote:
Many, myself included, are captivated by the amazing capabilities of
chatGPT and other LLMs. They are, truly, incredible. Depending on your
definition of Turing Test, it passes with flying colors in many, many
contexts. It would take a much stricter Turing Test than we might have
imagined this time last year, before we could confidently say that
we're not talking to a human. One way to improve chatGPT's performance
on an actual Turing Test would be to slow it down, because it is too
fast to be human.

All that said, is chatGPT actually intelligent?  There's no question
that it behaves in a way that we would all agree is intelligent. The
answers it gives, and the speed at which it gives them, reflect an
intelligence that often far exceeds that of most if not all humans.

I know some here say intelligence is as intelligence does. Full stop,
conversation over. ChatGPT is intelligent, because it acts
intelligently.

But this is an oversimplified view!  The reason it's oversimplified is
that it ignores what the source of the intelligence is. The source of
the intelligence is in the texts it's trained on. If ChatGPT were
trained on gibberish, that's what you'd get out of it. It is amazingly
similar to the Chinese Room thought experiment proposed by John
Searle. It is manipulating symbols without having any understanding of
what those symbols are. As a result, it does not and cannot know
whether what it's saying is correct. This is a well-known caveat of
using LLMs.

ChatGPT, therefore, is more like a search engine that can extract the
intelligence that is already structured within the data it's trained
on. Think of it as a semantic google. It's a huge achievement in the
sense that training on the data in the way it does, it encodes the
_context_ that words appear in with sufficiently high resolution that
it's usually indistinguishable from humans who actually understand
context in a way that's _grounded in experience_. LLMs don't
experience anything. They are feed-forward machines. The algorithms
that implement chatGPT are useless without enormous amounts of text
that expresses actual intelligence.

Cal Newport does a good job of explaining this here [1].

Terren

 --
You received this message because you are subscribed to the Google
Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send
an email to everything-list+unsubscr...@googlegroups.com.
To view this discussion on the web visit
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_mmt0jKwGAGWrZo%2BQc%3DgcEq-o0jMd%3DCGEiJA_4cN6B6g%40mail.gmail.com
[2].


Links:
------
[1]
https://www.newyorker.com/science/annals-of-artificial-intelligence/what-kind-of-mind-does-chatgpt-have
[2]
https://groups.google.com/d/msgid/everything-list/CAMy3ZA_mmt0jKwGAGWrZo%2BQc%3DgcEq-o0jMd%3DCGEiJA_4cN6B6g%40mail.gmail.com?utm_medium=email&utm_source=footer
