Publications

In progress

Lie-Panis, J., Fitouchi, L., Baumard, N. & André, J.-B. Why human societies adopt rigid moral rules: The efficiency–robustness trade-off. Under review.
Humans are capable of remarkably flexible moral judgment. Yet societies rely on rigid rules—obligations and prohibitions that apply categorically, even when case-by-case reasoning could yield better outcomes. Why would a species capable of such flexibility bind itself to inflexible rules? We propose that rigid rules arise as social technologies for managing ambiguity around noncooperation. People often have legitimate reasons for failing to cooperate, yet those reasons are typically opaque to observers, allowing opportunists to disguise selfishness as justified hardship. We formalize this idea with an evolutionary game-theoretic model. Two cooperative equilibria emerge: a flexible norm that accommodates legitimate excuses but is vulnerable to exploitation, and a rigid norm that closes this loophole by mandating cooperation even when inefficient. Comparing these equilibria reveals an efficiency–robustness trade-off: flexibility maximizes welfare when trust is secure, whereas rigidity preserves cooperation when trust is fragile. This explains why rigid rules prevail in low-trust settings (interactions with strangers, formal institutions, or tight societies), whereas flexibility is more common in high-trust contexts.
Lie-Panis, J., André, J.-B. & Miton, H. Professionals of trust: Intermediaries concentrate exchanges, creating incentives for reliability.
Trade is most profitable when partners are distant, but also most fragile, because distance makes it harder to trust that others will keep their side of the bargain. When exchange stretches beyond local ties, societies often rely on intermediaries—merchants, brokers, auctioneers, and other professionals who stand between buyers and sellers. At first glance this solution is paradoxical: why would inserting a third party create trust? Intermediation adds another link in the chain—another stranger who can misrepresent quality, divert goods, or renege on payment. We propose that intermediation creates trust by concentrating exchanges onto fewer reputations, raising the cost of any single failure and strengthening incentives for reliable conduct. We formalize this idea in a model of reputation-based cooperation, involving buyers, suppliers, and intermediaries. The model yields two trade regimes: direct buyer–supplier exchange, where suppliers build reputations with buyers, and intermediated exchange, where intermediaries broker transactions and build reputations with buyers instead. Comparing the two, we show that intermediation expands the scope of trade—and that this extension can arise purely from scarcity: as intermediaries become fewer, each handles more transactions, raising the reputational cost of unreliable trade. Intermediaries, in this view, are professionals of trust: they can be relied on because they have the most to lose.
Le Pargneux, A., Lie-Panis, J., Cashman, M. & Cushman, F. Modeling fairness judgments of splits between multiple parties.
Fair divisions are a fundamental problem for moral cognition. Past experimental work has provided evidence in support of three principles of fairness in divisions between two parties: proportional splits, equal splits, and splits that equalize net gains (the Nash bargaining solution). When the number of parties increases, so does the complexity of such decisions, potentially influencing people’s cognitive strategies and their reliance on precise, explicit heuristics versus intuitive approximations. We design a novel task in which participants can easily and intuitively sample and select among various distributions of resources between multiple people (2 to 18) by adjusting a continuous slider that updates divisions in real time. In two preregistered experiments (n = 378; 2,268 choices), we quantitatively model participants’ fairness judgments at the individual level. We find that participants can be categorized into three main groups. Overall, around 50% are best fitted by proportionality, 40% by the Nash bargaining solution, and fewer than 10% by equality. At the aggregate level, proportionality and the Nash bargaining solution perform best. Fairness judgments remain stable as the number of involved parties increases. When they only have access to the slider (Experiment 1), participants best fitted by the Nash bargaining solution seem to intuitively approximate it, but in an imprecise way. By contrast, they tend to precisely select it when three buttons (one for each model, Experiment 2) are available to automatically adjust the slider. These results suggest that a substantial proportion of participants rely on intuitive approximations of the Nash bargaining solution, consistent with bargaining-based (contractualist) theories of fairness.
Lie-Panis, J. & Hilbe, C. Cooperation investments: Building capacity and signaling intent.
People do not just cooperate — they regularly invest in their cooperative abilities. Partners learn to resolve conflicts, colleagues train for their roles within large teams, and people adopt the shared practices of their community. Often, these activities do not benefit others per se. Rather they facilitate future cooperative interactions. Yet such cooperation investments are rarely captured in evolutionary game theory, which focuses on cooperative decisions instead of the preparatory efforts that enable them. Here, we develop a model that allows individuals to incur upfront costs to lower their future cooperation costs. Through mathematical analysis and evolutionary simulations, we show that cooperation investments serve a dual function. First, they allow individuals to transform into more effective partners, enabling cooperation where it would otherwise be too costly. Second, they can serve as honest signals of cooperative intent. When their costs deter would-be cheaters, investments allow observers to identify trustworthy partners, further expanding the cooperative domain. Our model helps to explain why people are attentive to others’ cooperation investments. When someone takes the time to learn more about a partner's hobbies or a community's norms, they do more than build capacity — they signal an intent to cooperate with us.
Walls, L., Celen, E., Lie-Panis, J., Collins, K., Tenenbaum, J., Jacoby, N. & Levine, S. How universal is universalization? Exploring the use of universalization in norm violation across the globe.
How do people know when it is permissible to break a rule? Sometimes people _universalize_, asking "What if everyone felt at liberty to violate the rule?" While there is mounting evidence that universalization guides rule-breaking judgments, this evidence is limited to English-speaking participants residing in the United States, leaving open the question of how universal universalization actually is. Moreover, geography and identity have important influences on morality, and cultures differ widely in the stringency with which they adhere to rules. In this paper, we use a language-agnostic video-game paradigm to investigate whether universalization guides rule-breaking judgments in 20 countries across the globe (n = 2,652 participants) and find that universalization plays an important role in moral judgment in every one. However, the strength of universalization varies. While cultures may vary dramatically in how strongly they adhere to norms, the underlying _logic_ of norm breaking appears remarkably consistent.

Journal articles

Lie-Panis, J., Baumard, N., Fitouchi, L. & André, J.-B. (2024). The social leverage effect: Institutions transform weak reputation effects into strong incentives for cooperation. Proceedings of the National Academy of Sciences.
Institutions allow cooperation to persist when reciprocity and reputation provide insufficient incentives. Yet how they do so remains unclear, especially given that institutions are themselves a form of cooperation. To solve this puzzle, we develop a mathematical model of reputation-based cooperation in which two social dilemmas are nested within one another. The first dilemma, characterized by high individual costs or insufficient monitoring, cannot be solved by reputation alone. The second dilemma, an institutional collective action, involves individuals contributing to change the parameters of the first dilemma in a way that incentivizes cooperation. Our model demonstrates that this nested architecture creates a leverage effect. While insufficient on its own to incentivize cooperation in the first dilemma, reputation incentivizes contributions to the institutional collective action, which, in turn, strengthen the initially weak incentives for cooperation in the first dilemma. Just as a pulley system transforms minimal muscular strength into significant lifting capability, institutions act as cooperative pulleys, transforming initially weak reputational incentives into powerful drivers of cooperative behavior. Based on these results, we suggest that institutions have developed as social technologies, designed by humans to exploit this social leverage effect, just as material technologies are designed to exploit physical laws.
Lie-Panis, J. & Dessalles, J.-L. (2023). Runaway signals: Exaggerated displays of commitment may result from second-order signaling. Journal of Theoretical Biology.
To demonstrate their commitment, for instance during wartime, members of a group will sometimes all engage in the same ruinous display. Such uniform, high-cost signals are hard to reconcile with standard models of signaling. For signals to be stable, they should honestly inform their audience; yet, uniform signals are trivially uninformative. To explain this phenomenon, we design a simple model, which we call the signal runaway game. In this game, senders can express outrage at non-senders. Outrage functions as a second-order signal. By expressing outrage at non-senders, senders draw attention to their own signal, and benefit from its increased visibility. Using our model and a simulation, we show that outrage can stabilize uniform signals, and can lead signal costs to run away. Second-order signaling may explain why groups sometimes demand displays of commitment from all their members, and why these displays can entail extreme costs.
Lie-Panis, J. & André, J.-B. (2022). Cooperation as a signal of time preferences. Proceedings of the Royal Society B: Biological Sciences.
Many evolutionary models explain why we cooperate with non-kin, but few explain why cooperative behaviour and trust vary. Here, we introduce a model of cooperation as a signal of time preferences, which addresses this variability. At equilibrium in our model (i) future-oriented individuals are more motivated to cooperate, (ii) future-oriented populations have access to a wider range of cooperative opportunities, and (iii) spontaneous and inconspicuous cooperation reveal stronger preference for the future, and therefore inspire more trust. Our theory sheds light on the variability of cooperative behaviour and trust. Since affluence tends to align with time preferences, results (i) and (ii) explain why cooperation is often associated with affluence, in surveys and field studies. Time preferences also explain why we trust others based on proxies for impulsivity, and, following result (iii), why uncalculating, subtle and one-shot cooperators are deemed particularly trustworthy. Time preferences provide a powerful and parsimonious explanatory lens, through which we can better understand the variability of trust and cooperation.

Public outreach

Lie-Panis, J. (2026). Guarding the guardians.
Lie-Panis, J., Fitouchi, L. & André, J.-B. (2025). Institutions et coopération [Institutions and cooperation].

Other

Commentary on Glowacki: Lie-Panis, J. & André, J.-B. (2024). Peace is a form of cooperation, and so are the cultural technologies which make peace possible. Behavioral and Brain Sciences.
PhD dissertation: Lie-Panis, J. (2023). Models of reputation-based cooperation: Bridging the gap between reciprocity and signaling. Université Paris Cité.