I found that link AFTER I wrote the above text btw.

Notice it says:

"The biggest puzzle in this field is the question of how the cell collective 
knows what to build and when to stop."

Based on context and hardcoded desires, the cell collective grows its way 
forward.

"While we know of many genes that are *required* for the process of 
regeneration, we still do not know the algorithm that is *sufficient* for cells 
to know how to build or remodel complex organs to a very specific anatomical 
end-goal"

"build an eye here"

"Imagine if we could design systems of the same plasticity and robustness as 
biological life: structures and machines that could grow and repair themselves."

"We will focus on Cellular Automata models as a roadmap for the effort of 
identifying cell-level rules which give rise to complex, regenerative behavior 
of the collective. CAs typically consist of a grid of cells being iteratively 
updated, with the same set of rules being applied to each cell at every step. 
The new state of a cell depends only on the states of the few cells in its 
immediate neighborhood. Despite their apparent simplicity, CAs often 
demonstrate rich, interesting behaviours, and have a long history of being 
applied to modeling biological phenomena."
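
To make "the same set of rules being applied to each cell" concrete, here is 
a minimal sketch of a classical CA update in Python (Conway's Game of Life, 
with numpy assumed). The neural CA in the article replaces this hand-written 
rule with a small learned network:

import numpy as np

def life_step(grid):
    # One synchronous update: the same local rule is applied to every cell,
    # and a cell's new state depends only on its 3x3 neighborhood.
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    alive = grid == 1
    # Game of Life rule: birth on exactly 3 neighbors, survival on 2 or 3.
    return ((neighbors == 3) | (alive & (neighbors == 2))).astype(np.uint8)

grid = np.random.randint(0, 2, (32, 32), dtype=np.uint8)
for _ in range(10):
    grid = life_step(grid)

Note the update here is synchronous: every cell is recomputed at once, which 
is exactly the assumption the next quote relaxes.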

"Typical cellular automata update all cells simultaneously. This implies the 
existence of a global clock, synchronizing all cells. Relying on global 
synchronisation is not something one expects from a self-organising system. We 
relax this requirement by assuming that each cell performs an update 
independently, waiting for a random time interval between updates"
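
One way to implement this relaxation is a random per-cell update mask, which 
is, as far as I can tell, what the article does. A minimal sketch of the idea 
(the update_rule argument and the fire_rate value are my illustrative 
placeholders, not the article's exact code):

import numpy as np

def async_step(grid, update_rule, fire_rate=0.5):
    # Each cell fires independently with probability `fire_rate`, so no
    # global clock is required; cells that do not fire keep their old state.
    proposed = update_rule(grid)
    mask = np.random.rand(*grid.shape) < fire_rate
    return np.where(mask, proposed, grid)

In expectation this behaves like each cell waiting a random interval between 
its own updates.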

Both the local and the global shape of the context (what is present, and 
where it is positioned) affect the prediction.

"We can see that different training runs can lead to models with drastically 
different long term behaviours. Some tend to die out, some don’t seem to know 
how to stop growing, but some happen to be almost stable! How can we steer the 
training towards producing persistent patterns all the time?"

Sounds like GPT-2: knowing when to finish a discovery sentence, and keeping 
on topic until it reaches the goal.

"we wanted the system to evolve from the seed pattern to the target pattern - a 
trajectory which we achieved in Experiment 1. Now, we want to avoid the 
instability we observed - which in our dynamical system metaphor consists of 
making the target pattern an attractor."
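
In dynamical-systems terms (my notation, not the article's): the CA update 
defines a map x[t+1] = f(x[t]). The target pattern x* is an attractor when 
it is (approximately) a fixed point, f(x*) = x*, and states near it contract 
toward it, ||f(x) - x*|| < ||x - x*|| for x in some neighborhood of x*. 
Training with the loss applied at several points along the trajectory pushes 
f toward having exactly this property.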

"Intuitively we claim that with longer time intervals and several applications 
of loss, the model is more likely to create an attractor for the target shape, 
as we iteratively mold the dynamics to return to the target pattern from 
wherever the system has decided to venture. However, longer time periods 
substantially increase the training time and more importantly, the memory 
requirements, given that the entire episode’s intermediate activations must be 
stored in memory for a backwards-pass to occur."
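
A hedged sketch of that training setup in PyTorch (the names ca_model, seed, 
and target are my placeholders; the article itself uses TensorFlow and a 
sample pool):

import torch

def train_episode(ca_model, optimizer, seed, target,
                  min_steps=64, max_steps=96):
    # Unroll the CA for a random number of steps, then apply the loss at
    # the end of the episode. Longer episodes mold the dynamics back toward
    # the target, but every intermediate state must be kept in memory for
    # the backward pass.
    state = seed.clone()
    n_steps = int(torch.randint(min_steps, max_steps + 1, (1,)))
    for _ in range(n_steps):
        state = ca_model(state)  # activations retained for backprop
    loss = ((state - target) ** 2).mean()
    optimizer.zero_grad()
    loss.backward()              # backprop through the whole episode
    optimizer.step()
    return loss.item()

The memory cost the authors mention is visible here: the autograd graph grows 
linearly with n_steps.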

That sounds like Hutter Prize compressor improvements: more RAM and a longer 
run buy better regeneration of the target from nothing (the seed, a 
compressed state).

"it’s been found that the target morphology is not hard coded by the DNA, but 
is maintained by a physiological circuit that stores a setpoint for this 
anatomical homeostasis"

We want to regenerate the shape (sort the words/articles) and also grow the 
organism/sentence, but we need to avoid non-stop growth past the matured 
rest-state goal, and to stop degeneration.