I tried more tests with the original davinci GPT-3 just now. Meh, it can be hard
to impress, I guess, but it gave a great response (first attempt) to one of my
tests. Look. My prompt ends at the 3rd quote (an opening quote):
"I was walking into a store and bought an apple. I walked out onto the road and
In comparison to my posted test of the new edit GPT-3 above, I tried older
GPT-3s. Hmm, they can do the edit thing too, maybe only slightly worse due to
not having double-sided context. I haven't analyzed my tests below much yet, but
it looks like it:
https://justpaste.it/86nnv
1 more set of tests: https://justpaste.it/45ksq
--
Artificial General Intelligence List: AGI
Permalink:
https://agi.topicbox.com/groups/agi/Ta3ab021cb12b517c-M30bbb559afd9ed7224739342
Delivery options: https://agi.topicbox.com/groups/agi/subscription
Jean-Paul,
On Tue, Mar 15, 2022 at 1:42 PM Jean-Paul VanBelle via AGI <
agi@agi.topicbox.com> wrote:
> Strange that you didn't reference Schank and conceptual dependency theory
> (1975) which appeared to be quite successful at representing huge amounts
> of human knowledge with a very small
My god, great god glory! OpenAI's new release is brain-dropping! Below are 2
tests I ran in their playground, where I show it evolving the context. In the
first test, the input is written by me, and then I feed it in, then I feed in
the output, and repeat. In run two, same, but each turn is separated by 2 newlines.
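The context-evolution loop described above (feed the model's output back in as part of the next prompt, with a separator between turns) can be sketched roughly like this. This is a minimal sketch, not OpenAI's code: `generate` is a hypothetical stand-in for a GPT-3 completion call (playground or API), stubbed here so the loop itself is runnable.

```python
# Sketch of the "evolve the context" procedure: append each completion
# to the running context and resubmit, separated by two newlines.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a GPT-3 completion call.
    # A real version would send `prompt` to the model and return its output.
    return f"<model continuation of {len(prompt)}-char context>"

def evolve_context(seed: str, rounds: int, sep: str = "\n\n") -> str:
    """Feed the model's own output back in, `rounds` times."""
    context = seed
    for _ in range(rounds):
        completion = generate(context)
        context = context + sep + completion  # feed the output back in
    return context

print(evolve_context("I was walking into a store and bought an apple.", rounds=3))
```

The second test in the links is the same loop; the only difference is the two-newline separator between each fed-back turn.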
The search for a scientific solution to ambiguity? I think this might be a
boundary issue.
On 14 Mar 2022 20:39, "Rob Freeman" wrote:
> In my presentation at AGI-21 last year I argued that semantic primitives
> could not be found. That in fact "meaning", most evidently by the
> historical best