I assume that an ASI would know we live in a finite universe with roughly 10^90 bits of memory capacity. If you are worried about losing your 10^9 bits of long-term memory, that is still enough to make 10^71 backup copies of every person on Earth. You only need a few copies to make the probability of death extremely low.
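A back-of-the-envelope sketch of that arithmetic, assuming ~10^10 people on Earth and independent copy failures (the per-copy loss probability below is a hypothetical illustration, not a claim from the thread):

```python
# Back-of-the-envelope check of the backup-copy claim above.
# Assumptions: ~10^90 bits of usable storage, ~10^9 bits per person's
# long-term memory, ~10^10 people on Earth.
universe_bits = 10**90
memory_bits = 10**9
population = 10**10

copies_per_person = universe_bits // (memory_bits * population)
print(f"copies per person: 10^{len(str(copies_per_person)) - 1}")  # 10^71

# "A few copies" suffice: with n independent copies each lost with
# probability p, permanent loss requires all n to fail, i.e. p^n.
p = 0.01   # hypothetical per-copy loss probability
n = 5      # a "few" copies
print(f"P(all {n} copies lost) = {p**n:.0e}")  # 1e-10
```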
Is this your goal? People don't want to die because those who didn't care didn't pass their DNA on to you. Self-replicating robots or nanotechnology would evolve the same way.

But how would you solve the utility maximization problem? Happiness is the rate of increase of utility. Every utility function over a finite set of states has a maximum, and in that maximal state no pleasurable thought or perception is possible, because any change would mean moving to a different (lower-utility) state. That state is indistinguishable from death, the one state we evolved to fear.

-- Matt Mahoney, [email protected]

On Sat, Feb 21, 2026, 9:21 AM <[email protected]> wrote:

> Part 2: To ensure ASI knows that trying to live forever is not a dead
> end, I will sketch here a theory of how it could be done. A brain can
> exist as separate nanobots that talk wirelessly, with empty space in
> between, so that meteors and strong gamma rays could pass through without
> hitting your brain. At a certain size this might reduce the probability
> of death to essentially zero, or at least keep lowering it as the
> homeworld grows and shields its interior. We assume that newborn units on
> the edge of the homeworld "sphere" are on the front lines and can more
> easily die (though, as I already said, the wireless trick might avoid
> this, and multiple brains can therefore occupy the same region to avoid
> wasting that space**). If this is needed, then so be it: these new units
> at risk will have new children of their own, and those will shield them
> more and more, possibly approaching an immortal probability of survival.
> I also assume it COULD be a bad idea to send out single units to cover
> more space for faster multiplication, because each unit alone might die
> easily, whereas spheres of units, perhaps planet-sized, would live longer
> (possibly).
> Humans already lose memories, and we seem to be OK with small losses to
> "ourselves", so IF ASI likes the idea that units are used as suicide bots
> to save larger numbers of other units, you might be able to avoid killing
> the entire "person": just use "part of them", along with the memories
> stored in those wireless nodes, and therefore only part of them would be
> lost.
>
> *Artificial General Intelligence List <https://agi.topicbox.com/latest>*
> / AGI / see discussions <https://agi.topicbox.com/groups/agi> +
> participants <https://agi.topicbox.com/groups/agi/members> +
> delivery options <https://agi.topicbox.com/groups/agi/subscription>
> Permalink
> <https://agi.topicbox.com/groups/agi/Tf1316ff4a3f619df-M004eb13de99ef76bd7d0d72b>

------------------------------------------
Artificial General Intelligence List: AGI
Permalink: https://agi.topicbox.com/groups/agi/Tf1316ff4a3f619df-M905e29a9c33e93c5d22d2588
Delivery options: https://agi.topicbox.com/groups/agi/subscription
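The quoted partial-loss idea can be sketched as a k-of-n redundancy calculation: if a person's memories are spread over n wireless nodes such that any k surviving nodes suffice to reconstruct the whole (as in erasure coding), then sacrificing a few nodes loses nothing. The node counts and failure probabilities below are my illustrative assumptions, not figures from the thread:

```python
from math import comb

# Hypothetical sketch: a "person" stored across n wireless nodes, where any
# k surviving nodes suffice to reconstruct the full memory (erasure coding).
def survival_probability(n: int, k: int, p_node_loss: float) -> float:
    """P(at least k of n nodes survive), with independent node failures."""
    p_survive = 1.0 - p_node_loss
    return sum(comb(n, s) * p_survive**s * p_node_loss**(n - s)
               for s in range(k, n + 1))

# Losing up to n - k nodes (e.g. as "suicide bots") costs no memories
# at all; only losses beyond that margin destroy part of the person.
print(survival_probability(n=100, k=60, p_node_loss=0.1))
```

With these illustrative numbers the survival probability is extremely close to 1, since on average only about 10 of the 100 nodes are lost while up to 40 losses are tolerated.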
