On Thu, Dec 26, 2013 at 12:52 PM, Aaron Hosford <hosfor...@gmail.com> wrote:
> There are other ways to be creative, but they all, to my knowledge, depend on this idea of filling in the gaps of a counterfactual hypothesis with reasonable assumptions based on previous observations. For example, I like to take two completely unrelated systems, make an arbitrary analogy between them, and then see what comes out.
--------------------------
Not all of the different aspects of creativity 'depend on filling in the gaps of a counterfactual hypothesis with reasonable assumptions based on previous observations,' even if something like that is necessary for making the result understandable. The reasonable assumptions necessary for understanding might themselves rest on unreasonable assumptions. For example, it makes perfect sense to think of Pinocchio existing in a medieval world that was imagined and rendered for a modern-day cartoon. The humor (and pathos) makes sense to us even though it includes meta-references that are not based on reasonable assumptions. Seeing Walt Disney talking to Mickey Mouse is a more reasonable association, even though it too is fantasy, of course. Creativity combines fantasy with objects of thought (derived from observation) that need to be well grounded to make the creative work understandable. I realize that is what Aaron meant, but he did not quite say that.

The fact that Aaron started out with two unrelated *systems* in his meta-example is a significant issue, in my opinion. The systems are not necessarily built only on reasonable assumptions. Therefore his claim must be relativistic: he is saying that the gaps in the synthesis must be filled by finding relatively reasonable assumptions.

Jim Bromer

On Thu, Dec 26, 2013 at 12:52 PM, Aaron Hosford <hosfor...@gmail.com> wrote:
>
> It's like this: You have an associative memory designed to fill in missing contextual details. It basically works by identifying rules and correlations in the inputs, and then extrapolating those and applying them to future inputs to fill in the gaps. Now you throw in an input that is either an observed input that has been mutated randomly, or a randomly generated input, and see how the associative memory fills in the gaps for the randomized input.
>
> Let's look at an example, so we can get away from the vague and generic language. Suppose you know all about cars from real world experience, and so you have learned that they have four wheels placed near the corners, that there is a front and a back, that there is a driver, that there are windows and mirrors so the driver can see, that there is an engine, etc. Now, I ask you to imagine something you have never experienced before (a randomized input/counterfactual claim): What would a car with 5 wheels be like? We could actually have a debate about how such a vehicle would work, arguing over the likely placement of the fifth wheel, whether the other four wheels' placements would be affected, and other design constraints.
>
> Other aspects we would likely all agree on without any debate or discussion; we would automatically make the exact same assumptions and generalizations. We would assume the car still had an engine, that it still had a front and back, that there is a driver, that there are windows and mirrors for the driver to see, etc. If anyone tried to say that having a fifth wheel caused these other things to change, we would argue with them unless they could provide a compelling link to having a fifth wheel, and if none could be provided, we would disagree and say they were wrong. Let me say this again, in other words: We would be making strong statements as to the factuality or counterfactuality of claims made about something that does not exist. (Yes, yes, there probably is such a car out there, but that's irrelevant as long as the people discussing it have never seen or heard about it.)
>
> The reason we can have an actual debate about the characteristics of an imaginary object goes back to the first paragraph. We have observed rules and correlations in real objects, and then we apply them to a scenario in which one or more well-defined counterfactual claims have been made. Then, we proceed as if nothing is amiss, and require all further analysis to be "reasonable", despite our strange starting point. This is the process of filling in the gaps by using an advanced associative memory. To be creative is simply this: Take the observed rules and correlations of a system, and intentionally break one or more of them. Then see how everything else reasonably shifts to accommodate these unusual points.
>
> I have read (and observed through personal experience) that the most creative moments are those just before falling asleep and just after waking up. I also get extremely creative when sleep deprived, provided I am not too tired to think at all. Given the somewhat random quality of the dream state, and the scrambling of thoughts when sleep deprived, this seems to fit well with the creative process as I have described it. A randomization in my thought processes produces an unexpected counterfactual hypothesis, and then I attempt to reconcile this hypothesis with what I know to be true. This can be done intentionally, too, by asking "What if?" questions. (What if clouds were green? What if the universe were like a Mobius strip? What if we had a word for the overwhelming urge to count things? What if chairs were all cushion and had no legs or backs? What if we could keep chopping a shape into smaller and smaller pieces to estimate its area, without ever stopping? What if computer viruses had a counterpart in biological brains? What if intelligence and creativity could be programmed into a computer and not just observed in human minds? How would things work then?)
>
> There are other ways to be creative, but they all, to my knowledge, depend on this idea of filling in the gaps of a counterfactual hypothesis with reasonable assumptions based on previous observations. For example, I like to take two completely unrelated systems, make an arbitrary analogy between them, and then see what comes out. Occasionally the analogy can be resolved reasonably, and I come out with a valuable insight or idea. (Compare capitalism to reinforcement learning. Money = reward. They are both distributed learning algorithms! How can I apply knowledge about each to the other through this analogy?)
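
Below is a minimal, hypothetical Python sketch of the gap-filling process Aaron describes: learn typical attribute values from observed examples, then complete a counterfactual input (the five-wheeled car) by falling back on those learned regularities for everything the hypothesis leaves unspecified. The function names and the toy data are invented for illustration; this is not anyone's actual implementation.

# A toy "associative memory": remember the most common value of each
# attribute seen so far, then fill the gaps of a counterfactual input.
from collections import Counter, defaultdict

def learn_defaults(observations):
    """Record, for each attribute, the most common value observed."""
    counts = defaultdict(Counter)
    for example in observations:
        for attribute, value in example.items():
            counts[attribute][value] += 1
    return {attr: c.most_common(1)[0][0] for attr, c in counts.items()}

def imagine(counterfactual, defaults):
    """Keep the deliberately broken attributes, fill the rest from memory."""
    completed = dict(defaults)        # start from what observation suggests
    completed.update(counterfactual)  # then impose the counterfactual claim
    return completed

observed_cars = [
    {"wheels": 4, "engine": True, "driver": True, "mirrors": True},
    {"wheels": 4, "engine": True, "driver": True, "mirrors": True},
]
defaults = learn_defaults(observed_cars)

# "What would a car with 5 wheels be like?" -- everything not touched by the
# counterfactual is assumed to stay as observed.
print(imagine({"wheels": 5}, defaults))
# {'wheels': 5, 'engine': True, 'driver': True, 'mirrors': True}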
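
And a toy illustration of the capitalism/reinforcement-learning analogy in the last quoted paragraph, assuming the simplest possible reading: each "firm" is a bandit-style learner, its actions are the goods it could offer, and the money it earns plays the role of the reward signal. The class name, the market numbers, and the learning rate are all made up for the sketch.

# Several independent learners, each adjusting toward whatever earned money:
# a crude picture of "capitalism as a distributed learning algorithm".
import random

class Firm:
    def __init__(self, goods):
        self.value = {g: 0.0 for g in goods}   # estimated payoff per good

    def choose(self, explore=0.1):
        if random.random() < explore:          # occasionally try something new
            return random.choice(list(self.value))
        return max(self.value, key=self.value.get)

    def learn(self, good, money, rate=0.2):
        # money earned is the reward; nudge the estimate toward it
        self.value[good] += rate * (money - self.value[good])

market_demand = {"bread": 3.0, "widgets": 1.0}   # hidden "environment"
firms = [Firm(list(market_demand)) for _ in range(5)]

for _ in range(200):
    for firm in firms:
        good = firm.choose()
        revenue = market_demand[good] + random.gauss(0, 0.5)
        firm.learn(good, revenue)

# Most, if not all, firms should settle on the more rewarding good.
print([max(f.value, key=f.value.get) for f in firms])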