Simulating Moral Communities
Before I started this fellowship, I was all but certain of what the outcome of my research would be. I was wrong. But to understand why my prediction was wrong, I should first explain the goal of the project.
In short, the goal of my research was to implement a proof of concept for a new methodology for doing moral philosophy, one that borrowed heavily from the field of complexity science. The idea was that if I could create a rudimentary (“agent-based”) simulation of social interactions (modeled as prisoner’s dilemma exchanges) within a population whose members all follow the same moral theory, and that population self-destructs, then we could safely conclude (within the context of the simulation) that the moral theory in question is self-defeating.
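To make the methodology concrete, here is a minimal sketch of what such a simulation might look like. It is not my actual implementation, just an illustration under simplifying assumptions: the payoff numbers match the ones described in the next paragraph (cooperating costs 1 resource point and gives the counterpart 3), agents are paired at random, reproduction is omitted, and all names (`simulate`, `payoff`, `always_cooperate`) are hypothetical.

```python
import random

# Illustrative payoffs: cooperating costs the actor 1 resource point and
# gives the counterpart 3. Mutual cooperation therefore nets +2 each,
# mutual defection nets 0, and a lone defector gains 3 while the lone
# cooperator loses 1.
COST, BENEFIT = 1, 3

def payoff(my_move, their_move):
    """Resource change for one agent after a single prisoner's dilemma exchange."""
    gain = BENEFIT if their_move == "C" else 0
    loss = COST if my_move == "C" else 0
    return gain - loss

def simulate(strategies, rounds=1000, starting_resources=10, seed=0):
    """Pair agents at random each round. A strategy is a function returning
    'C' (cooperate) or 'D' (defect). Agents whose resources fall below zero
    are removed; if fewer than two agents remain, the community has
    effectively self-destructed."""
    rng = random.Random(seed)
    resources = {name: starting_resources for name in strategies}
    for _ in range(rounds):
        if len(resources) < 2:
            break  # the community has collapsed
        a, b = rng.sample(list(resources), 2)
        move_a, move_b = strategies[a](), strategies[b]()
        resources[a] += payoff(move_a, move_b)
        resources[b] += payoff(move_b, move_a)
        resources = {k: v for k, v in resources.items() if v >= 0}
    return resources

# A population whose members all follow the same "moral theory":
# here, unconditional cooperation.
always_cooperate = lambda: "C"
survivors = simulate({f"agent_{i}": always_cooperate for i in range(20)})
print(f"{len(survivors)} agents survived")
```

In this toy version, the question of whether a moral theory is self-defeating reduces to whether a population of agents all following it can keep its resources (and thus its members) above zero over many rounds.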
More concretely, a prisoner’s dilemma is a scenario where two agents (or prisoners) can either “cooperate” with or “defect” against one another. If an agent decides to cooperate, it costs them, say, 1 resource point (the more resources you have, the higher your chance of reproducing). If that agent is lucky and their counterpart also cooperates, they gain 3 resource points in return. So if both parties cooperate, both walk away with a net +2 resource points (or a lighter prison sentence, whichever you prefer). Crucially, however, neither agent knows in advance what the other will do. Below is a visualization, called a “game theory matrix,” of how each possible scenario can play out in the more traditional setting of two prisoners cooperating with or defecting from their partner in crime: