Ever worried that AI will wipe out humanity? Ever dreamed of merging with AI? These are the central concerns of transhumanism and existential risk studies, movements you may not have heard of, but whose prominent advocates include Elon Musk and Nick Bostrom, author of Superintelligence. Joshua Schuster and Derek Woods, however, have pointed out serious problems with transhumanism's dreams and fears: its privileging of human intelligence above that of all other species, its assumption that genocides matter less than mass extinction events, and its failure to think historically when speculating about the future. They argue that if we really want to make the world and its technologies less risky, we should instead encourage cooperation and engagement with social and ecological issues.