Evolving Morality: From Primordial Soup to Superintelligent Machines

Gen Ed categories: Science & Technology in Society | Ethics & Civics

Joshua D. Greene
Gen Ed 1046    |    Spring 2025
Monday & Wednesday, 1:30 PM – 2:45 PM

How can we understand the evolution of morality—from primordial soup to superintelligent machines—and how might the science of morality equip us to meet our most pressing moral challenges?

In this course we’ll examine the evolution of morality on Earth, from its origins in the biology of unthinking organisms, through the psychology of intelligent primates, and into a future inhabited by machines that may be more intelligent and better organized than humans.

First, we ask: What is morality? Many people believe that morality descends from above, as divine commands or as abstract, timeless principles akin to mathematical truths. Here we take an empirical approach to morality, viewing it as a natural phenomenon that rises up from below—born of the strategic interactions among lifeforms and societies struggling to exist.

Next, we take a scientifically informed look at the foundational questions of moral and political philosophy. Many people believe that the “is” of scientific knowledge has nothing to do with the fundamental “oughts” of morality, that science and morality exist in separate realms (and belong in separate courses). Here we challenge this assumption, asking whether our scientific self-knowledge can, and should, change our views about what’s right and wrong and how a society should be organized.

Finally, we consider the distinctive moral challenges posed by what may be the next stage in Earth’s evolutionary history: the rise of artificial intelligence. Many people believe that there is and always will be a fundamental division between human minds and machines. Here we challenge this assumption, going beyond the tropes of science fiction and drawing instead on the latest advances in cognitive neuroscience and neurally inspired artificial intelligence. Our conclusions will have implications for moral challenges of the near and more distant future: Can self-driving cars, military drones, and life-like robots be programmed to behave morally? Will artificial intelligence displace human labor? If so, how can our societies adapt? Could machines displace humans entirely? If so, how can we stay in control? If machines do take over, will they be our conquerors or our children?

Across diverse topics, this course explores the implications of a single idea: that the wonder we see around us, and ahead of us, is the product of competition and cooperation at increasing levels of complexity.