That's a great point. To be honest, anyone who is accurately mimicked by
a bot should be just fine with that mimicry, the operative word being
"accurately", of course. I mean, isn't that a sci-fi plot? Your bot
responds to things so that you don't have to.
A friend of mine recently objected that
Steve Smith mentioned the Senate hearing about regulating LLMs. During the
hearing someone mentioned (sort of in passing) that it would make sense to
release such systems in stages: first to a small group of people, then to a
larger group, etc. That reminded me of the standard approach to drug trials.
I don’t really get it. Trump can go on a TV town hall and lie, and those folks
just lap it up. Sue a company for learning some fancy patterns? Really? If
someone made a generative model of, say, Glen’s visual appearance and vocal
mannerisms and gave him a shtick that didn’t match up with
Jochen -
Very interesting framing... as a follow-up I took the converse
(inverse?) question to GPT-4:
/If we consider an LLM (Large Language Model) as the Sancho Panza to
the Don Quixote of its human users, we can explore a couple of
potential aspects:/
1.
/Grounding and
I asked Bard (bard.google.com) today about Don Quixote from Cervantes,
and whether a large language model would be similar to a Don Quixote without a
Sancho Panza. Here is what Bard replied: "In a way, large language models can be
seen as Don Quixotes without Sancho Panzas. They are trained on
Thanks, Steve! I enjoy these slices of history and peeking into the
discussions of the time on fundamental issues.
Owen, were you involved with the Interscript project mentioned near the end
of the story? Interscript being the scripting of dynamic Interpress
documents which later spun out to
https://medium.com/chmcore/a-backup-of-historical-proportions-93f5f502f608
Interesting article on the recovery of a huge cache of Xerox PARC archives,
which also touches on more than a little DEC and Adobe history.
As I watch the live questioning of Altman on AI in Congress...