I just realized something. Say you do have this dual-sensory sampling, with one digital feed and one analog feed going into a "multi-channel" yet integrated hybrid codec, temporally synchronized (temporally lossless). When one sense's feed stops or goes empty, for example the environment goes dark or the audio goes quiet, the codec drops into either full-lossy or full-lossless mode. Versus when it's noisy and light out, the codec runs full-modal lossy with lossless (lossy-lossless). If it's always noisy during daylight and quiet at night, the codec becomes a flip-flop lossness system: lossy during the day and lossless at night. And not just in day/night scenarios, but in sampling specific frequencies and digital channels... and scanning them... adaptively optimizing for bandwidth and power consumption.
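Here is a minimal sketch, in Python, of that adaptive switching as I'm imagining it. The channel names, activity measure, thresholds, and function names are all illustrative assumptions, not any real codec API; it just shows how the day/night flip-flop falls out of per-channel activity.

from dataclasses import dataclass

@dataclass
class Channel:
    name: str          # e.g. "audio" (analog) or "video" (digital)
    activity: float    # 0.0 = silent/dark, 1.0 = noisy/bright
    is_analog: bool    # analog feeds default toward lossy encoding

def select_mode(ch: Channel, quiet_threshold: float = 0.1) -> str:
    """Pick an encoding mode for one channel.

    A quiet/dark channel needs almost no bandwidth, so it can be dropped to
    full lossy or kept full lossless at negligible cost; an active channel
    gets the hybrid lossy-with-lossless treatment.
    """
    if ch.activity < quiet_threshold:
        # Feed is effectively empty: pick a single "full" mode.
        return "full_lossy" if ch.is_analog else "full_lossless"
    # Feed is busy: hybrid mode trades fidelity for bandwidth and power.
    return "hybrid_lossy_lossless"

def schedule(channels: list[Channel]) -> dict[str, str]:
    """Scan all channels and assign each one a mode for this time slice."""
    return {ch.name: select_mode(ch) for ch in channels}

if __name__ == "__main__":
    day = [Channel("audio", 0.8, True), Channel("video", 0.9, False)]
    night = [Channel("audio", 0.02, True), Channel("video", 0.01, False)]
    print(schedule(day))    # both channels hybrid during noisy daylight
    print(schedule(night))  # flip to full lossy / full lossless at night

Run the scheduler on each sampling window and the "flip-flop lossness" behavior emerges on its own as activity rises and falls.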
What's this got to do with AI/AGI? Multi-modal lossness bandwidth optimization is one thing. You can add smarts to it to priority-optimize sensory and knowledge feed channels; a toy allocation sketch follows below. Channels can be full duplex… or full multiplex… or full digital/analog (lossless/lossy) hybrid multiplex… with probability waves, probability carrier waves, etc. All AIs/AGIs exist in transmissionary states.
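As a toy sketch of the priority-optimization idea, assuming a fixed bandwidth budget and hand-picked priority weights (both hypothetical, not anything from a real system):

def allocate_bandwidth(priorities: dict[str, float], budget_kbps: float) -> dict[str, float]:
    """Split budget_kbps across channels in proportion to their priority weights."""
    total = sum(priorities.values())
    if total == 0:
        return {name: 0.0 for name in priorities}
    return {name: budget_kbps * w / total for name, w in priorities.items()}

if __name__ == "__main__":
    # e.g. vision prioritized over audio, audio over a background knowledge feed
    print(allocate_bandwidth({"vision": 3.0, "audio": 2.0, "knowledge": 1.0}, 600.0))
    # -> {'vision': 300.0, 'audio': 200.0, 'knowledge': 100.0}

The "smarts" would live in whatever sets those weights from moment to moment, on top of the per-channel lossy/lossless switching above.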
