I've reduced my AI problem to the following:

Let F be a traditional feed-forward neural network, trainable via
back-propagation or its variants.

Given training data: many pairs of { input K, output K' }.

The requirement is:

K₁' = (F ∘ F ∘ ⋯ ∘ F)(K₁) = F^(m₁)(K₁)
K₂' = (F ∘ F ∘ ⋯ ∘ F)(K₂) = F^(m₂)(K₂)
⋮
Kₙ' = (F ∘ F ∘ ⋯ ∘ F)(Kₙ) = F^(mₙ)(Kₙ)

where the number of iterations mᵢ can vary from one data point to the next.

The goal is to learn F.

The problem is harder than ordinary back-prop because the gradient must flow
through repeated applications of the same F (as in back-propagation through
time), and the variable number of iterations makes it harder still.
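One way to attack this is to treat the iterated F as a recurrent net with tied weights: unroll F mᵢ times for each pair and sum the gradients from every iteration into the one shared weight matrix. Here is a minimal numpy sketch under assumed simplifications (F is a single ReLU layer F(x) = relu(Wx), squared-error loss, synthetic data generated from a known W_true; all names are illustrative, not from the original post):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3

def relu(x):
    return np.maximum(x, 0.0)

def iterate(W, x, m):
    # m-fold composition F^(m)(x) with F(x) = relu(W @ x)
    for _ in range(m):
        x = relu(W @ x)
    return x

# Synthetic "true" F (hypothetical): positive weights and inputs keep
# activations mostly in the active region of the ReLU.
W_true = np.abs(rng.normal(size=(d, d))) * 0.5

# Training triples (K, K', m): K' = F^(m)(K), m varying per data point.
data = []
for _ in range(50):
    K = np.abs(rng.normal(size=d))
    m = int(rng.integers(1, 4))
    data.append((K, iterate(W_true, K, m), m))

def avg_loss(W):
    return float(np.mean([np.sum((iterate(W, K, m) - Kp) ** 2)
                          for K, Kp, m in data]))

W = np.abs(rng.normal(size=(d, d))) * 0.5   # random initial guess for F
loss_before = avg_loss(W)
lr = 0.005

for epoch in range(300):
    for K, Kp, m in data:
        # Forward pass, caching every iterate for the backward pass.
        xs = [K]
        for _ in range(m):
            xs.append(relu(W @ xs[-1]))
        # Backward pass: back-prop through time over the m unrolled copies,
        # accumulating the gradient into the single shared W.
        g = 2.0 * (xs[-1] - Kp)              # dLoss/d(output)
        dW = np.zeros_like(W)
        for t in range(m - 1, -1, -1):
            pre = W @ xs[t]
            g = g * (pre > 0)                # ReLU derivative mask
            dW += np.outer(g, xs[t])         # shared-weight gradient
            g = W.T @ g                      # propagate to earlier iterate
        n = np.linalg.norm(dW)
        if n > 5.0:                          # crude clipping for stability
            dW *= 5.0 / n
        W -= lr * dW

loss_after = avg_loss(W)
```

This is just plain SGD on the unrolled computation; it says nothing yet about the harder identifiability question (many different F's may satisfy the same iterated constraints).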

It may help if the sigmoid function in F is replaced by the piecewise-linear
rectifier (ReLU).

I'm wondering if other tricks could help... any ideas?

-- 
*YKY*
*"The ultimate goal of mathematics is to eliminate any need for intelligent
thought"* -- Alfred North Whitehead


