On Sat, Aug 16, 2014 at 9:44 AM, Pierz <[email protected]> wrote:

> On Saturday, August 16, 2014 11:26:08 PM UTC+10, jessem wrote:
>
>> I think you're being misled by the particular example you chose involving addition; in general there is no principle that says finding the appropriate entry in a lookup table involves a computation just as complicated as the original computation without a lookup table. Suppose instead of addition, the lookup table is based on a Turing-test-type situation where an intelligent AI is asked to respond to textual input, and the lookup table is created by doing a vast number of runs, all starting from the same initial state but feeding the AI *all* possible strings of characters under a certain length (the vast majority will just be nonsense, of course). Then all the possible input strings can be stored alphabetically, and if I interact with the lookup table by typing a series of comments to the AI, it just has to search through the recordings alphabetically to find one where the AI responded to that particular comment (after responding to my previous comments, which constitute the earlier parts of the input string); it doesn't need to re-compute the AI's brain processes or anything like that. And ultimately, regardless of the type of program, the "input" will be encoded as some string of 1's and 0's, so for *all* lookup tables the possible input strings can be stored in numerical order, analogous to alphabetical order for verbal statements.
>
> No, of course a lookup table can help, as I went on to say a few minutes later in a different reply when I realized the mistake. But I've explained in my longer reply to Liz what I was trying to say here. It depends on what level we wish to simulate to. A mere lookup table of outer behaviours such as speech acts won't be sufficient for a complete simulation. The more fine-grained and responsive I wish to make my simulation, the more computation will be required to select the correct recordings, and the shorter and shallower the recordings will be. But read my reply to Liz. Hopefully I explain myself better there.
Well, in my example of the Turing test, if the AI was a mind upload, then the output could easily be a detailed playback of all the activity in its simulated brain at the synaptic level as it was answering my questions, in addition to the AI's textual output. But it would still just be a *recording* of the brain activity it went through during the original creation of the lookup table, when the upload was simulated responding to every possible input sequence. By "talking" to the lookup table, I don't think I increase the measure of the experiences associated with the upload seeing my side of the dialogue and responding, though the original creation of the lookup table would have increased the measure associated with all the experiences of seeing all the possible input strings. Note that even though an output showing detailed brain activity is very "fine grained", it isn't true that more computation is "required to select the correct recordings" than if I just got textual output, nor are the recordings "shorter and shallower".

Perhaps you were talking about making the *input* more fine-grained? Suppose instead of just interacting with the upload via text, I want to have a virtual puppet body in the upload's simulated world (where the upload has his own simulated body), and I have a system that detects all the nerve signals leaving my brain and transfers them to the simulated motor neurons of the puppet body that the upload sees in front of him, and his physical responses (along with any changes in other physical objects in the virtual world) are translated into the appropriate signals to my sensory neurons, a la The Matrix. So here both the input and output are quite fine-grained.
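(A toy sketch of the point above, with invented entries: the cost of selecting a recording from the table depends only on the number of keys, not on how detailed the stored recording is, so a synaptic-level playback is no harder to select than a line of text.)

```python
from bisect import bisect_left

# Hypothetical lookup table: sorted input strings -> prerecorded outputs.
# One entry stores a short textual reply, the other a (much larger)
# fine-grained "synaptic playback"; selection cost is identical for both.
keys = sorted(["hello", "how are you?"])
recordings = {
    "hello": "Hi there.",                        # coarse: text only
    "how are you?": ["synapse trace"] * 10_000,  # fine-grained playback
}

def select(table_keys, query):
    """Binary search over the sorted keys: O(log n) comparisons,
    regardless of the size of the recording being retrieved."""
    i = bisect_left(table_keys, query)
    if i < len(table_keys) and table_keys[i] == query:
        return recordings[table_keys[i]]
    return None  # no recorded run matched this input

assert select(keys, "hello") == "Hi there."
assert len(select(keys, "how are you?")) == 10_000
```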
To create the lookup table, someone would have to run a host of simulations in which the puppet body interacting with the upload is fed *all* possible combinations of signals to its motor neurons, the vast majority of which would presumably lead it to flail around randomly, or perhaps be immobilized due to equal numbers of signals arriving at opposing muscle groups. This original work to create the lookup table is obviously computationally intensive, but if I want to later interact with the finished lookup table, finding the right recorded output to feed to my sensory neurons in response to my bodily output should be much less difficult than the original simulation needed to create that recording. The original simulation would require simulating all the physical changes in the virtual world, including the upload's brain activity, moment-by-moment to see how everything reacts to the motor neuron outputs fed to the puppet body. On the other hand, finding the appropriate response to my motor neuron outputs in the lookup table is just a matter of coding my motor neuron outputs as 1's and 0's, then looking up that sequence in a table of sequences of 1's and 0's listed in numerical order, and playing back the recording associated with it. This is more complicated than the original scenario where we just have to find textual inputs, but computationally I think it would still be simpler than actually simulating all the changes in the virtual world moment-by-moment.

Jesse

--
You received this message because you are subscribed to the Google Groups "Everything List" group.
To unsubscribe from this group and stop receiving emails from it, send an email to [email protected].
To post to this group, send email to [email protected].
Visit this group at http://groups.google.com/group/everything-list.
For more options, visit https://groups.google.com/d/optout.
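(A minimal sketch of the selection step described above, with a made-up encoding and table: the motor-neuron outputs are packed into a bit string read as an integer, and the numerically ordered table is binary-searched; no part of the virtual world is re-simulated.)

```python
from bisect import bisect_left

def encode(firings):
    """Pack a tuple of 0/1 motor-neuron outputs into an integer key."""
    key = 0
    for bit in firings:
        key = (key << 1) | bit
    return key

# Table built during the (expensive) original simulations:
# integer key -> recorded sensory-neuron signals to play back.
table = sorted([(encode((0, 1, 0)), "recording A"),
                (encode((1, 1, 0)), "recording B")])
keys = [k for k, _ in table]

def playback(firings):
    """Cheap selection: O(log n) search in the numerically ordered table."""
    i = bisect_left(keys, encode(firings))
    if i < len(keys) and keys[i] == encode(firings):
        return table[i][1]
    return None  # no simulated run used this motor-output sequence

assert playback((1, 1, 0)) == "recording B"
```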

