Matt Mahoney wrote:
Maybe because philosophy isn't real science, and Oxford decided FHI's
funding would be better off spent elsewhere. You could argue that
existential risk of human extinction is important, but browsing their
list of papers doesn't give me a good feeling that they have
If the fine structure constant were tunable across different hypothetical
universes, how would that affect the overall intelligence of each universe? Dive
into that rabbit hole: express and/or algorithmicize the intelligence of a
universe. There are several potential ways to do that, some of
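The post leaves its "several potential ways" unstated, so purely as one toy illustration (my own assumption, not anything proposed in the thread): treat the tunable constant as a rule parameter of a toy universe, e.g. an elementary cellular automaton, and score each resulting universe by how much non-trivial structure it generates, using compressed size as a crude proxy somewhere between trivial order and pure noise.

```python
import zlib

def evolve_ca(rule: int, width: int = 64, steps: int = 64) -> bytes:
    """Run an elementary cellular automaton from a single-cell seed.

    The rule number stands in for the 'tunable constant' of this toy universe.
    """
    rule_bits = [(rule >> i) & 1 for i in range(8)]
    row = [0] * width
    row[width // 2] = 1
    history = bytearray(row)
    for _ in range(steps):
        # New cell value is looked up from the (left, center, right) neighborhood.
        row = [rule_bits[(row[(i - 1) % width] << 2)
                         | (row[i] << 1)
                         | row[(i + 1) % width]]
               for i in range(width)]
        history.extend(row)
    return bytes(history)

def structure_proxy(rule: int) -> float:
    """Crude 'interestingness' score: compressed size over raw size.

    Near 0 for trivial universes (everything compresses away),
    larger for universes that generate incompressible structure.
    """
    raw = evolve_ca(rule)
    return len(zlib.compress(raw)) / len(raw)

# Sweep the tunable "constant" and compare the resulting universes.
scores = {rule: structure_proxy(rule) for rule in (0, 30, 110, 255)}
```

This is obviously not a measure of intelligence, only of generated complexity; it is meant as the cheapest possible stand-in for the kind of per-universe scoring function the post gestures at.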