Imagine you're walking along a beach and see a plastic bottle. Two natural questions come to mind: How did it get here? and Who is responsible for this?
It was produced in a factory, then shipped, stocked on a shelf, bought by someone, used, and then thrown away. Then it wasn't properly collected or recycled, entered a river, traveled with the current, and eventually washed up here, at your feet.
Each step matters. If the bottle had never been produced, it wouldn't exist. If it had been disposed of properly, it might never have reached the ocean. If waste systems had worked differently, its journey could have ended elsewhere. In other words, the bottle is here because of a combination of multiple factors, not just one.
Now comes the more complicated part. Who should be held responsible for this bottle on the beach?
This is the question of causal responsibility. And even though many factors caused the problem, we don't necessarily see them all as equally responsible.
That plastic bottle at your feet is the end point of a long chain of contributing factors.
On this page we introduce the basics of the Theory of Actual Causality, which provides formal tools for making these intuitions precise.
For centuries, philosophers and legal scholars have relied on a simple idea to determine whether something is a cause: X caused Y if, but for X, Y would not have happened. In other words, remove the suspected cause from the story: if the effect disappears, causation is established; if the effect remains, it was not the cause. In legal practice this is known as the sine qua non test. For simple cases it works well — but does it always work? Let's see.
Consider a simple example. Someone drops a plastic bottle on the ground near a river. A municipal waste collector passes by but doesn't pick it up. The bottle washes into the river. Both actions were needed for the bottle to enter the environment — if it hadn't been dropped, there would be nothing to collect; if the collector had picked it up, it would never have reached the water. Let's represent this with a causal graph: a diagram where each node is a variable and each edge shows direct causal influence.
Here the structural equation is E = min(L, C) — a logical AND, where L = 1 means the bottle was littered and C = 1 means the collector left it behind. Both conditions must hold for the effect to occur. The but-for test handles this perfectly: if the person hadn't littered, or if the collector had picked it up, the bottle would never have reached the river. Each one is individually a but-for cause.
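To make the test concrete, here is a minimal sketch in Python. The encoding and the but_for helper are ours, following the equation above:

```python
# Minimal sketch of the littering example as a structural causal model.
# Encoding (ours): L = 1 the bottle is littered, C = 1 the collector
# leaves it behind, E = 1 the bottle reaches the river.

def effect(L: int, C: int) -> int:
    return min(L, C)  # E = min(L, C): both conditions must hold

actual = {"L": 1, "C": 1}

def but_for(var: str) -> bool:
    """var is a but-for cause if flipping it alone changes the effect."""
    flipped = dict(actual, **{var: 1 - actual[var]})
    return effect(**flipped) != effect(**actual)

print(but_for("L"))  # True: no littering, no bottle in the river
print(but_for("C"))  # True: collection would have stopped it
```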
In other cases, however, the but-for test breaks down.
Now consider a different scenario. Two factories sit along a river, upstream of a water treatment plant. Each factory dumps plastic waste into the river. Either factory's waste alone would be enough to block the plant's water intake. The plant shuts down.
The causal graph looks the same, two parents and one child, but the structural equation is different. Instead of AND (both required), we have OR (either sufficient): E = max(A, B), where A and B indicate whether each factory dumps.
This situation is called overdetermination: multiple sufficient causes converge, and each one is individually dispensable.
Note that while neither factory individually passes the but-for test, the set {A = 1, B = 1} does. But for both factories dumping, the plant would not have shut down. So, causes can be sets of variables, not just individual ones.
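A sketch of the two-factory model makes the contrast visible. The set-valued but_for helper below is our illustrative extension of the single-variable test:

```python
# Sketch of the two-factory case: E = max(A, B), a logical OR.

def effect(A: int, B: int) -> int:
    return max(A, B)  # either factory's waste alone blocks the intake

actual = {"A": 1, "B": 1}

def but_for(vars_to_flip: set) -> bool:
    """A set is a but-for cause if flipping all its members changes the effect."""
    flipped = {v: (1 - x if v in vars_to_flip else x) for v, x in actual.items()}
    return effect(**flipped) != effect(**actual)

print(but_for({"A"}))       # False: Factory B alone still shuts the plant
print(but_for({"B"}))       # False: and vice versa
print(but_for({"A", "B"}))  # True: only the pair passes the test
```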
And this is not a toy problem. In environmental contexts, overdetermination is the norm. Ocean plastic does not come from a single source. Every country's waste is individually dispensable — the ocean would still be polluted without any single nation's contribution.
Consider another example. Two ships carrying plastic waste are heading toward the same harbour to dump their cargo illegally. Ship A arrives first and dumps its waste. The authorities detect the pollution and block the passage into the harbour. Ship B, arriving later, is turned away, so it cannot enter.
The harbour is polluted. Ship A's waste is floating in the water. Ship B never got the chance to dump. Yet here is the puzzle: had Ship A not dumped, the passage would have stayed open, and Ship B would have dumped its waste instead. The harbour would be polluted either way.
Apply the but-for test: but for Ship A's dumping, would the harbour be polluted? Yes — Ship B would have done it. So the but-for test says Ship A is not the cause. But that is clearly wrong: Ship A's waste is literally the waste in the water. Ship B's dumping is purely hypothetical — it never happened.
This is called preemption: one cause gets there first and blocks the other. The causal graph needs intermediate variables to make the blocking visible:

The middle row shows the mechanism. Ship A decides to dump (DA = 1) and arrives first, so AH = 1 (A's waste is in the harbour). The authorities detect the pollution and close the passage, preventing Ship B from entering: BH = DB ∧ ¬AH = 0. Ship B wanted to dump (DB = 1) but couldn't get in (BH = 0).
Neither decision passes the but-for test on its own; the test only identifies the set of both variables, DA and DB, as a cause of the pollution, despite the fact that the actual pollution came entirely from Ship A.
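Here is the same model as a runnable sketch. The equations for AH and BH follow the text; the final pollution equation P = AH ∨ BH and the solve helper are our completion of the model:

```python
# The preemption model as code. Equations follow the text:
# AH = DA, BH = DB AND NOT AH, and (our addition) P = AH OR BH.

def solve(DA: int, DB: int) -> dict:
    AH = DA                     # Ship A dumps unobstructed
    BH = int(DB and not AH)     # Ship B enters only if the passage is open
    P = int(AH or BH)           # the harbour is polluted
    return {"AH": AH, "BH": BH, "P": P}

print(solve(DA=1, DB=1))  # actual world: A's waste pollutes, B is blocked
print(solve(DA=0, DB=1))  # but for A: B dumps instead, P is still 1,
                          # so the but-for test acquits Ship A
print(solve(DA=0, DB=0))  # only flipping both decisions clears the water
```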
The problems above prompted philosophers and computer scientists to develop more precise definitions of causation. Several influential proposals have emerged over the past few decades, each trying to capture the right intuitions across these hard cases.
The most influential formal framework is the Halpern-Pearl definition (2001, revised 2015), which introduces a witness set — a set of variables held fixed at their actual values — to reveal when a factor is decisive.
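A compact sketch shows the witness idea on the two-ship example. This is a simplified illustration of the mechanism, not a full implementation of the Halpern-Pearl definition:

```python
# Freezing BH at its actual value keeps Ship B's merely hypothetical
# dumping out of the counterfactual. Simplified encoding (ours).

def pollution(DA: int, DB: int, frozen: dict | None = None) -> int:
    frozen = frozen or {}
    AH = frozen.get("AH", DA)
    BH = frozen.get("BH", int(DB and not AH))
    return int(AH or BH)

print(pollution(DA=0, DB=1))                    # 1: plain but-for acquits Ship A
print(pollution(DA=0, DB=1, frozen={"BH": 0}))  # 0: with witness {BH = 0},
                                                #    flipping DA removes the
                                                #    pollution, so DA is a cause
```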
A closely related approach is the NESS test (Wright 1985, formalised by Beckers): a factor is a cause if it was a Necessary Element of a Sufficient Set for the outcome. Rather than counterfactual dependence, it asks whether the factor was part of a package of conditions that together were enough to produce the harm. Both definitions agree on most real-world cases; they diverge only on subtle edge cases that legal and moral philosophers continue to debate.
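The flavour of the NESS test can be sketched on the two-factory case. The encoding below is a toy version for binary OR models, not Beckers' full formalisation:

```python
# Toy NESS check (ours) for the two-factory model E = max(A, B).
from itertools import combinations, product

VARS = ("A", "B")

def effect(A: int, B: int) -> int:
    return max(A, B)

def sufficient(assignment: dict) -> bool:
    """Fixing these values guarantees the effect for every setting of the rest."""
    others = [v for v in VARS if v not in assignment]
    return all(effect(**{**dict(zip(others, rest)), **assignment}) == 1
               for rest in product((0, 1), repeat=len(others)))

def ness_cause(var: str, actual: dict) -> bool:
    """var is a Necessary Element of some Sufficient Set of actual conditions."""
    pool = [v for v in actual if v != var]
    for r in range(len(pool) + 1):
        for combo in combinations(pool, r):
            with_var = {v: actual[v] for v in (var, *combo)}
            without_var = {v: actual[v] for v in combo}
            if sufficient(with_var) and not sufficient(without_var):
                return True
    return False

actual = {"A": 1, "B": 1}
print(ness_cause("A", actual))  # True: {A = 1} is itself a sufficient set
print(ness_cause("B", actual))  # True, by symmetry
```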
The question of causation is binary: yes or no. But responsibility comes in degrees. A lone polluter bears more causal weight than one polluter among a thousand.
Chockler and Halpern (2004) proposed the first graded measure of causal responsibility. The idea is elegant: how close was variable X to being a cause? If X is already a cause (removing X alone changes the outcome), it has maximal responsibility. If k other variables must also change before X becomes pivotal, its degree of responsibility is dr = 1/(k + 1).
When k = 0, X is already pivotal and bears full responsibility: dr = 1. When k = 5, five other variables must change before X becomes the tipping point: dr = 1/6. The larger the coalition shielding X from pivotality, the lower its individual responsibility.
Imagine a parliament voting on whether to allow plastic waste exports. The policy passes by majority. How responsible is each "yes" voter?
Try a 51–50 vote: each "yes" voter has dr = 1 (fully pivotal). Try 101–0: each has dr = 1/51 (deeply redundant). This is the mathematical signature of diffusion of responsibility.
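A small sketch reproduces both numbers. The vote counts are the ones from the text; the helper name is ours:

```python
# Sketch of the voting example. dr = 1/(k + 1), where k is the number
# of other "yes" votes that must flip before this voter is pivotal.

def degree_of_responsibility(yes_votes: int, total_voters: int) -> float:
    threshold = total_voters // 2 + 1      # simple majority
    if yes_votes < threshold:
        return 0.0                         # the policy did not pass
    k = yes_votes - threshold              # flips needed to reach the tipping point
    return 1 / (k + 1)

print(degree_of_responsibility(51, 101))   # 1.0: 51-50, every yes voter pivotal
print(degree_of_responsibility(101, 101))  # 0.0196... = 1/51: deeply redundant
```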
Responsibility deals with the actual world; blameworthiness deals with the agent's knowledge. An agent who did not and could not know the consequences of their actions is less blameworthy, even if their responsibility is high. Conversely, an agent who knowingly chose a harmful alternative when a safer option existed is more blameworthy.
The concept has been developed across three key papers. Chockler & Halpern (2004) introduced the first formal notion: the degree of blame as the expected degree of responsibility over an agent's epistemic state — the probability distribution over causal settings the agent considers possible. If you don't know your plastic will leak, your blame is lower than your responsibility.
Halpern & Kleiman-Weiner (2018) refined the definition for a single agent by incorporating two additional factors: how much the agent's action raised the probability of harm compared to an alternative action, and how costly that alternative would have been. If a costless alternative existed, blame is high; if avoiding harm was expensive or difficult, blame is mitigated.
Friedenberg & Halpern (2019) extended the framework to multi-agent settings — exactly the kind we face with ocean plastic. When thousands of agents each contribute a tiny amount, any one individual's contribution is near zero, so single-agent definitions assign near-zero blame to everyone. The group is clearly blameworthy, yet no individual is. Friedenberg & Halpern solved this by first defining group blameworthiness and then dividing it among individuals using the Shapley value — a game-theoretic tool that assigns each agent a share based on their marginal contribution across all possible coalitions. They proved the Shapley value is the unique allocation satisfying three natural axioms: individual blames sum to the group blame, symmetric agents receive equal blame, and agents with larger marginal contributions receive more blame.
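The Shapley computation itself is easy to sketch. In the toy below, the group-blame function is an illustrative stand-in (a threshold game over made-up dumping amounts), not the Friedenberg-Halpern definition:

```python
# Sketch of a Shapley-value split of a group blame score.
from itertools import permutations

def shapley(agents: list, value) -> dict:
    """Average each agent's marginal contribution over all orderings."""
    shares = {a: 0.0 for a in agents}
    orderings = list(permutations(agents))
    for order in orderings:
        coalition = set()
        for agent in order:
            before = value(coalition)
            coalition.add(agent)
            shares[agent] += value(coalition) - before
    return {a: s / len(orderings) for a, s in shares.items()}

dumps = {"A": 3, "B": 2, "C": 1}  # hypothetical dumping amounts

def group_blame(coalition: set) -> float:
    """Toy stand-in: blame of 1 once a coalition's total dumping reaches 3."""
    return 1.0 if sum(dumps[a] for a in coalition) >= 3 else 0.0

print(shapley(list(dumps), group_blame))
# {'A': 0.666..., 'B': 0.166..., 'C': 0.166...}: shares sum to the
# group blame of 1.0, as the efficiency axiom requires
```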
Two companies dump a pollutant into a river. If the total reaches the lethal threshold k, the fish die. The companies don't know the exact threshold; they only have beliefs about its range. The degree of blame is the expected degree of responsibility over those beliefs. In the counterfactual scenarios, each company either dumps its full amount or nothing at all.
Try narrowing the belief range to see how gaining knowledge changes blame. A company that studies the ecosystem and learns k is low gets a different blame score than one that remains ignorant.
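A sketch of that computation, with illustrative dumping amounts and a uniform belief over thresholds (all names and numbers here are ours):

```python
# Degree of blame as expected responsibility over believed thresholds.

def responsibility(mine: float, theirs: float, k: float) -> float:
    if mine + theirs < k:
        return 0.0   # fish survive: no harm, no responsibility
    if theirs < k:
        return 1.0   # stopping my dump alone saves the fish: dr = 1
    return 0.5       # the other dump must also stop first: dr = 1/(1 + 1)

def blame(mine: float, theirs: float, beliefs: list) -> float:
    """Expected responsibility over the thresholds the agent considers possible."""
    return sum(responsibility(mine, theirs, k) for k in beliefs) / len(beliefs)

# A company dumping 10 units alongside another dumping 10, believing
# the lethal threshold lies somewhere in 5..25:
print(blame(10, 10, beliefs=list(range(5, 26))))  # ~0.62
# After studying the ecosystem and learning k is low (5..10):
print(blame(10, 10, beliefs=list(range(5, 11))))  # 0.5
```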
This is where information matters morally. A consumer who has read Lobelle et al. (2024) and Navarre et al. (2024) knows that a significant portion of Dutch plastic is exported and that the export destinations have high leakage rates. This consumer has a different epistemic state, and therefore higher blameworthiness, than an uninformed consumer, even though their actions and their causal responsibility are identical.
The asymmetry of knowledge. A municipality that commissions a material-flow study and discovers its exports leak to the ocean cannot return to a low-blame state of ignorance. The study itself changes the epistemic state, which changes blameworthiness for all subsequent decisions. This is why the Lobelle et al. (2024) and Navarre et al. (2024) studies are not merely descriptive — they are morally consequential.
We can now say who caused what, and how much responsibility they bear. The final step is measuring the harm itself — not as a vague notion of damage, but as a precise quantity grounded in the causal structure.
Beckers, Chockler, and Halpern (2024) proposed a definition of harm that avoids the traps that plague simpler accounts. Their approach introduces two key ingredients on top of the structural causal model: a utility function over outcomes, and a default utility d that serves as the baseline against which outcomes are compared.
The default utility d is crucial. It answers the question: what outcome was the agent entitled to expect? For ocean ecosystems, a natural default might be the pre-industrial plastic load (essentially zero). For a municipality, it might be the leakage rate achievable with current best-practice waste management.
The contrastive structure is what makes this definition work. It doesn't just ask "is the outcome bad?" (H1). It asks "is there a specific alternative action that would have produced a better outcome through a genuine causal pathway?" (H2). And it checks that the alternative isn't itself harmful (H3).
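As a rough illustration, the three checks can be sketched as utility comparisons. This is our own simplified reading of the contrastive structure (in particular, it leaves out the causal-pathway requirement of H2), not the formal definition:

```python
# Simplified reading (ours) of the three harm conditions as utility
# comparisons. Variable names and numbers are illustrative.

def harmed(actual_u: float, alternative_u: float, default_u: float) -> bool:
    h1 = actual_u < default_u        # H1: the outcome is bad relative to the default
    h2 = alternative_u > actual_u    # H2: a specific alternative does better
    h3 = alternative_u >= default_u  # H3: the alternative is not itself harmful
    return h1 and h2 and h3

# A municipality exporting waste (ecosystem utility 0.2) when local
# processing (0.9) was available, against a best-practice default of 0.8:
print(harmed(actual_u=0.2, alternative_u=0.9, default_u=0.8))  # True
```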