Thanks for your interest :)

The neural network in BERT / GPT is used to predict masked tokens in text.

I propose that this prediction step can be regarded as a general logical inference step.

I then place this neural network in a reinforcement learning
framework, where it represents the transition function (mapping
one state to the next).

Under reinforcement learning, each "action" in the state space receives
a reward, so the system learns the best action to perform at each
state.  Strictly speaking, the "actions" in my system are not actions
in the usual sense, just "thought actions" leading from one thought
vector to the next.
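To make the idea concrete, here is a minimal sketch in Python/NumPy.  All
names (DIM, N_ACTIONS, the goal-based reward) are hypothetical
illustrations, not part of any actual implementation: each "thought
action" is a small network mapping the current thought vector to a
candidate next one, and a reward over states drives action selection.

```python
import numpy as np

rng = np.random.default_rng(0)

DIM = 8          # dimensionality of a "thought vector" (hypothetical)
N_ACTIONS = 4    # number of candidate "thought actions" (hypothetical)

# One weight matrix per thought action: each maps the current thought
# vector to a candidate next thought vector (a transition function).
W = rng.normal(scale=0.1, size=(N_ACTIONS, DIM, DIM))

def transition(state, action):
    """Apply thought-action `action` to `state`, giving the next state."""
    return np.tanh(W[action] @ state)

def reward(state):
    """Toy reward: prefer thought vectors close to a fixed goal vector."""
    goal = np.ones(DIM) / np.sqrt(DIM)
    return float(state @ goal)

# Greedy rollout standing in for a learned policy: at each step, try
# every thought action and keep the one with the highest reward.
state = rng.normal(size=DIM)
trajectory = []
for step in range(5):
    rewards = [reward(transition(state, a)) for a in range(N_ACTIONS)]
    best = int(np.argmax(rewards))
    state = transition(state, best)
    trajectory.append((best, rewards[best]))
```

In a real system the greedy search would be replaced by a proper RL
algorithm (e.g. policy gradient), and the transition network by a
transformer; the point here is only the state → action → next-state
shape of the loop.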

I may use GPT in an AGI prototype, since GPT is a particularly efficient
architecture.

If you also plan to use deep learning in AGI, we could discuss more :)

Also, the Curry-Howard correspondence provides a link between the
logic-based approach and the program-based approach (such as the one
proposed by Ben Goertzel in his latest general AGI theory).  Programs
are just a more general class of computations that can be regarded as
a kind of logic (such as the λ-calculus).  This perspective unifies the
two approaches, so researchers can have a clearer view of the space
of possibilities:  they can more easily recognize when they are merely
re-inventing the wheel, and when a representation is genuinely new.
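A tiny illustration of the correspondence, sketched in Python type
hints (the function names are mine, chosen for the logic reading):
under Curry-Howard, a proposition is a type and a proof is a program
of that type, so logical inference rules become ordinary functions.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition A -> A; its proof is the identity program (λx. x).
def identity(x: A) -> A:
    return x

# From (A -> B) and A, conclude B: modus ponens is just
# function application under Curry-Howard.
def modus_ponens(f: Callable[[A], B], a: A) -> B:
    return f(a)

# Function composition proves transitivity of implication:
# from (A -> B) and (B -> C), conclude (A -> C).
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))
```

Python's type system is of course far weaker than a proof assistant's,
but the same reading carries over directly to typed λ-calculi, where
the correspondence is exact.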

On 8/12/21, [email protected]
<[email protected]> wrote:
> Can you clarify what you are doing? Are you predicting answers (solving
> logic), are you adding reward to GPT? What do you think you are doing? Will
> you use GPT? What do you expect to do to or with GPT?

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: 
https://agi.topicbox.com/groups/agi/Tb5526c8a9151713b-M7a1ec1c7a41e5306906c4852