
First Paris Conference | Frontiers of Economics and Philosophy | 16-17 May


The Paris School of Economics is pleased to invite you to the first conference on the theme “Frontiers of Economics and Philosophy”.

  • Dates: Thursday 16 and Friday 17 May
  • Venue: Paris School of Economics
    48 boulevard Jourdan, 75014 Paris, room R2-01

Overview

We do not, alas, live in a world in which Arrow, Rawls, and Sen can teach a joint graduate seminar, as they did in 1969. Nevertheless, the pioneering work of this “trinity”, among many others, inspired the creation of a productive field of academic research at the intersection of economics and philosophy. This combination of economics and philosophy has proved powerful and fruitful for both disciplines, while also laying claim to being an academic discipline in its own right. But what, at present, lies at the frontier of research at the intersection of economics and philosophy?

The conference “Frontiers of Economics and Philosophy” aims to answer this question. More specifically, it aims to bring together researchers at different stages of their careers who work at the intersection of economics and philosophy, so that they can share their research.

This event is organised as part of the “Économie - Philosophie” project of the Ouvrir la science économique initiative.

Program

Thursday 16 May

08:50-09:00 - Welcome and introduction by Marc Fleurbaey (PSE, CNRS)

09:00-09:40 - Akshath Jitendranath (PSE)
What are we talking about when we are talking about hard choices?

On one view---call this the classical view---a hard choice is a situation that condemns an agent to failure. Specifically, hard choices condemn an agent to failure because rational agents must act in a way that can be justified, but they cannot do so when facing a hard choice. They cannot do so because, in a hard choice, an agent’s given reasons for action will fail to determine what, all things considered, the agent ought to do. Further, the claim that an agent’s given reasons for action will fail to determine what they ought to do is characterized by the fact that the agent cannot optimize, or go for the best, in the situation that they find themselves in. The objective of this paper is to defend this view of hard choices, and my defense involves presenting an argument by elimination. That is, I consider and “eliminate” alternatives to the classical view of hard choices by showing that they fail.

One alternative view---call this the revisionary view---posits that hard choices show us that rational choice theory is conceptually ill-equipped to study practical reasoning in general, and hard choices in particular. The entire paradigm should, therefore, be jettisoned or revised. Against this I argue that not only is this conclusion wrong; almost the exact opposite may be true. The revisionary view of hard choices has been developed with an insufficient appreciation of rational choice theory. Yet another alternative view---call it the view from rational choice theory---would deny that the possibility of making a rationally justified choice is precluded in situations where one cannot optimize. Proponents of this view include Isaac Levi and Amartya Sen, who argue that in hard choices we can invoke decision rules that are weaker than choosing from the set of optimal or best elements. This, however, begs the question in favor of these weaker decision rules. Why, we may ask, are these weaker decision rules justified as a basis of rational decision-making? By focusing on Levi’s decision rule, I show that it entails a controversial axiological position, as it precludes the organic unity of value.

09:40-10:20 - Roberto Veneziani (Queen Mary University of London)
(with Stan Cheung and Marco Mariotti)
The Hard Problem and the Tyranny of the Loser

A Hard Problem is a collective choice problem in which the only feasible alternatives apart from the status quo consist of a welfare gain to some people (the Winners) and a welfare loss to the others (the Losers). These problems are typical in a number of settings, such as climate action, anti-trust regulation, and tax design. We study how to make collective choices when faced with Hard Problems. We find that requiring a relatively weak fairness condition, which we call Expansion Solidarity, necessarily leads to a dictatorship of the Losers, no matter how small their number. Even one single Loser must be given the power to veto any departure from the status quo, regardless of the number of Winners, how large the gains, or how small the loss.

10:20-11:00 - Chrisoula Andreou (The University of Utah)
Choosing Well: The Good, the Bad, and the Trivial

Few, if any, of us are complete strangers to the at least occasional frustration of having proceeded in a way that was self-defeating. My talk (which is based on my book of the same name) focuses on how challenging choice situations and messy preference structures can lead to self-defeating patterns of choice over time. The relevant patterns are associated with being tempted or torn and include cases of procrastination. Theories of rational choice often dismiss or abstract away from the sort of messiness that I focus on. They assume that rational agents can and should have neat preferences over their options; but this assumption is problematic. Ultimately, I show that rationality can validate certain messy or ‘disorderly’ preference structures while also protecting us from self-defeating patterns of choice.

11:00-11:15 - Break

11:15-11:55 - Ralf Bader (Université de Fribourg)
Probabilistic exploitation

This paper develops a probabilistic version of exploitation, to which agents with incomplete preferences are susceptible, that gets arbitrarily close to a forcing money pump. By repeatedly presenting such agents with the possibility of choosing an option that is not dominated by the available alternatives (due to non-comparability), but that is counterfactually dominated by an option that could have been reached had the agent chosen differently at a prior node, one can increase the probability of the agent ending up with a sub-optimal outcome in such a way that one arbitrarily closely approximates a forcing money pump. Whilst myopic agents can be induced to incur some almost-sure losses, the size of which is determined by the extent of non-comparability, the use of backwards induction classifies pure strategies as not feasible and thereby allows probabilistic exploitation to be iterated, such that a sophisticated agent can be induced to incur an almost-sure loss of any magnitude.

11:55-12:35 - Petra Kosonen (The University of Texas at Austin)
Bounded Utilities and Ex Ante Pareto

This paper shows that decision theories on which utilities are bounded, such as standard axiomatizations of Expected Utility Theory, violate Ex Ante Pareto if combined with an additive axiology, such as Total Utilitarianism. A series of impossibility theorems point toward Total Utilitarianism as the right account of axiology, while money-pump arguments put Expected Utility Theory in a favorable light. However, it is not clear how these two views can be reconciled. This question is particularly puzzling if utilities are bounded (as standard axiomatizations of Expected Utility Theory imply) because the total quantity of well-being might be infinite or arbitrarily large. Thus, there must be a non-linear transformation from the total quantity of well-being into utilities used in decision-making. However, such a transformation leads to violations of Ex Ante Pareto. So, the reconciliation of Expected Utility Theory and Total Utilitarianism prescribes prospects that are better for none and worse for some.

12:35-13:30 - Lunch

13:30-14:10 - Constanze Binder (Erasmus University Rotterdam)
TBC

14:10-14:50 - Marina Moreno (Ludwig Maximilian University of Munich)
On the Possibility of Desire-Based Instrumental Rationality

The Humean tradition understands instrumental rationality, and rationality altogether, as desire-based. That is, it holds that our ends are exclusively given by our desires and (instrumental) rationality is merely concerned with telling us how to achieve them. Both because and despite its simplicity, desire-based instrumental rationality arguably still has the status of a default theory, even if only as a dialectical opponent to be overcome. For instance, decision theory is usually understood to be fundamentally concerned with how agents can make choices that best fulfill their desires, given the constraints and information available to them. I aim to investigate the plausibility and possibility of a coherent theory of desire-based instrumental rationality. To this end, I introduce three constraints for a theory of desire-based instrumental rationality: any such theory must be desire-based, universal, and non-trivial in a precise sense to be explained. I then explore the relationship between these requirements and demonstrate that they stand in strong tension with one another. I show that they characterise exactly one particular theory: a theory I call “Pareto for Desires”. I consider an objection to Pareto for Desires according to which it is too permissive. I argue that there are reasons to believe that this objection can be overcome. Yet, I argue that if it cannot, then there is no plausible coherent theory of desire-based instrumental rationality, turning the possibility into an impossibility.

14:50-15:00 - Break

15:00-15:40 - Belle Wollesen (The London School of Economics and Political Science)
Models, Measurement and Manipulability

Research on voting rules treats the absence of opportunities for strategic voting as highly desirable from a normative standpoint. Yet there is little substantial discussion of what constitutes the harm of strategic voting. I show that this lacuna has far-reaching consequences for the very tools used to evaluate voting rules, namely measures of manipulability. While there is widespread agreement that we should seek to design systems that are resistant to manipulation, and hence have a lower degree of manipulability, there is little agreement on what the right measure of manipulability is.
Furthermore, disagreement typically centers on which idealizations of the target system are more accurate. However, manipulability, as a ‘thick’ concept, is evaluative by nature, incorporating both descriptive and normative dimensions.

This paper aims to highlight the normative dimension of different solutions for preventing manipulation. I start from the different ways in which one may legitimize voting, and the ways in which manipulation may undermine that legitimization, and then evaluate whether measures of manipulability are well equipped to track these harms. To this end, the paper focuses on two ways of measuring manipulability: the Nitzan-Kelly index and the practice of evaluating the complexity class of strategic voting under a given social choice rule.

I conclude that, just as the legitimization of voting, and thus the potential harm of manipulation, can vary by context, so does the appropriate measure of manipulability.

15:40-16:20 - Erica Celine Yu (Erasmus University Rotterdam)
Duty or betrayal : Changing minds and deliberative representation

When representative committees enter into deliberations, they seem to be faced with two opposing forces. On the one hand, they should be able to transform their judgments away from their constituents’ in order to reap the epistemic benefits of deliberation. On the other hand, they have a responsibility to their constituents to make their views present and fight for them in deliberations. Should representatives in deliberations then be delegates who simply follow the instructions of their constituents, or trustees who exercise their own independent judgment? I follow Urbinati (2000) in arguing that both of these roles are essential for a representative as an advocate to play in deliberations: they should have both a passionate link to their electors’ cause and relative autonomy of judgment. The question I then aim to answer in the paper is the following: how can representatives effectively participate in rational deliberations while being good advocates for their constituents? In order to operationalize the normative standard for dynamic deliberative representation in Urbinati’s theory of representation as advocacy, I use List’s (2011) model of deliberation as judgment transformation. Specifically, I argue that a judgment transformation function should satisfy four axioms of rational deliberation, and two new axioms of representative deliberation which capture the normative standard provided by the theory of representation as advocacy. I then show how a specific class of judgment transformation functions, value-constrained revision functions, satisfies these six axioms of rational and representative deliberation.

16:20-16:30 - Break

16:30-17:10 - Silvia Milano (The London School of Economics and Political Science)
TBC

17:10-17:50 - Alexander Tolbert (Emory University)
Causal Agnosticism About Race: Variable Selection Problems in Causal Inference

This paper proposes a novel view in the philosophy of race and causation literature, known as “causal agnosticism” about race. Causal agnosticism about race implies that it is reasonable to refrain from making judgments about whether race is a cause. The paper’s thesis is that certain conditions must be met to infer that something is a cause, according to the fundamental assumptions of causal inference; in the case of race, however, these conditions are often violated. By advocating for causal agnosticism, the paper suggests a more modest approach to understanding the role of race in causal relationships.

Friday 17 May

09:00-09:40 - Stephane Zuber (PSE, CNRS)
(with Mark Budolfson and Dean Spears)
Separable Social Welfare Evaluation for Multi-Species Populations

If non-human animals experience wellbeing and suffering, then such welfare consequences arguably should be included in a social welfare evaluation. Yet economic evaluations almost universally ignore non-human animals, in part because axiomatic social choice theory has not characterized multi-species social welfare functions. Here we propose axioms and characterize a range of functional forms to fill this gap. Among these, we identify a new characterization of additively-separable generalized (multi-species) total utilitarianism. We provide examples to illustrate that non-separability across species is implausible in a multi-species setting, in part because good lives for different species are at very different welfare levels.

09:40-10:20 - Michael Livermore (University of Virginia Law School)
Heteric Welfarism: Intuitions and Puzzles

Heteric welfarism is the view that the diversity of subjective experience is morally significant, such that worlds with a greater variety of experiences are better, ceteris paribus, than worlds with less variety of subjective experience. Heteric welfarism vindicates intuitions that species extinction (at least of complex, sentient organisms) is a special loss, above and beyond effects on individual well-being. It also provides reasons to protect endangered cultures and ways of life. Questions raised by heteric welfarism include its interaction with the Pareto principle, how best to treat rare but negative experiences, and concerns related to the leveling-down objection and the repugnant conclusion. This talk will describe heteric welfarism, discuss some of its motivating intuitions, and explore some of the puzzles that it raises.

10:20-11:00 - Susumu Cato (The University of Tokyo)
Population ethics with thresholds

We propose a new class of social quasi-orderings in a variable-population setting. In order to declare one utility distribution superior to another, the critical-level utilitarian value of the former must surpass the value of the latter by a positive margin. For each possible absolute value of the difference between the population sizes of two distributions to be compared, we specify a non-negative threshold level and a threshold inequality. This inequality indicates whether the corresponding threshold level must be reached or surpassed in order to establish betterness. All of these threshold critical-level utilitarian quasi-orderings perform same-number comparisons by means of the utilitarian criterion. In addition to this entire class of quasi-orderings, we axiomatize two important subclasses. The members of the first subclass are associated with proportional threshold functions, and the well-known critical-band utilitarian quasi-orderings are included in this subclass. The quasi-orderings in the second subclass employ constant threshold functions; the members of this second subclass have, to the best of our knowledge, not been examined so far.

11:00-11:15 - Break

11:15-11:55 - Franz Dietrich (PSE, CNRS), Kai Spiekermann (The London School of Economics and Political Science)
TBC

11:55-12:35 - Marcus Pivato (Centre d’Économie de la Sorbonne)
Social aggregation of conditional beliefs

A “conditional probability system” (CPS) is a mathematical structure that determines what an agent’s probabilistic beliefs would be, conditional on some possible background information. Formally, let S be a space of possible states of nature, let A be a Boolean algebra of subsets of S, and let B be an arbitrary subset of A, not containing the empty set. A CPS is a real-valued function p defined on the Cartesian product of A and B. For any event a in A and any “background knowledge” b in B, p(a|b) is interpreted as the probability of a, conditional on b.

In the simplest cases, a CPS is entirely determined by a standard probability distribution, via Bayes’ rule: if S itself is an element of B, and p(b|S)>0, then p(a|b) = p(a∩b|S)/p(b|S) for all a in A. But a CPS allows us to meaningfully define p(a|b) even if p(b|S)=0. Furthermore, a CPS can be defined even when S itself is not an element of B. Both of these features are useful in many applications.

The theory of “belief aggregation” considers how to combine the beliefs of two or more experts into a collective belief. These beliefs are typically represented as probability measures. Instead, I consider the aggregation of conditional probabilities. This generalization allows us to escape some of the antinomies which plague the classical theory of probabilistic belief aggregation, but it also introduces surprising new phenomena, and has an appealing “epistocratic” interpretation. In addition to pure belief aggregation, I apply this to Bayesian social aggregation.

12:35-13:30 - Lunch

13:30-14:10 - Marc Fleurbaey (PSE, CNRS)
Beyond Social Justice

The good life has been treated as an issue to be sidelined in theories of justice, but this is questionable, because collective life includes choosing the quantity and mix of various public goods, which determine an important part of the substance of individual lives. Thus it is impossible to separate the right from the good, and inappropriate to consider the latter a matter of private morality. Moreover, it is arguably important for humanity to deliberate about its collective destiny and projects with at least as great a priority as seeking justice in its institutions. A related issue is whether there is value in humanity’s collective achievements that goes beyond the value such achievements imply for individuals.

14:10-14:50 - Mark Budolfson (The University of Texas at Austin)
Cost-benefit analysis, externalities, monetization, different populations

I examine arguments for specific extensions or alternatives to traditional economic approaches to policy for dealing with externalities that impact different populations.

14:50-15:00 - Break

15:00-15:40 - Natalie Gold (The London School of Economics and Political Science)
TBC

15:40-16:20 - Jean-Francois Laslier (PSE, CNRS), Ingela Alger (Toulouse School of Economics)
(Partial) Universalization Ethics : A Formalization and Some Applications

The talk develops a model of ethical citizens by positing Kantian or semi-Kantian preferences, whereby behavior is evaluated in light of what the outcome would be should a fraction of the other citizens choose the same course of action. Some results from models with such citizens are also discussed.

16:20-16:30 - Break

16:30-17:10 - Enrico Mattia Salonia (Toulouse School of Economics)
Welfarist Meritocracy

I develop a framework to conceptualise different understandings of meritocracy. A meritocracy is characterised by a metric of merit and a related reward system. Individuals obtain a higher reward when they score higher on the metric of merit. I focus on a strictly welfarist understanding of these two elements. An individual’s action scores higher on the metric of merit than another if it leads to a Pareto improvement in welfare. The reward for merit is individual welfare. I show that, under these two assumptions, for any collective action profile, there is a meritocratic reward system implementing it. I thus argue that meritocracy is a vacuous allocation rule when conceptualised through a purely welfarist lens. As a result, I propose that meritocracy should be viewed as a fundamentally non-welfarist criterion: either the metric of merit or the reward system must be unrelated to welfare.

17:10-17:50 - Raphael Gomes De Oliveira (CY Cergy Paris Université)
Impartial Observer Theorem with Ambiguity

This paper revisits the debate between Harsanyi and Rawls on collective decision-making under impartiality. Harsanyi argued for judging resource distributions based on expected utility, while Rawls argued for a maxmin rule (the Difference Principle). Both of them relied on some kind of uncertainty to model their Veil of Ignorance. We propose a version of Harsanyi’s Impartial Observer Theorem that introduces ambiguity, adapting a model by Grant et al. (2010) to an Anscombe and Aumann (1963) setup. By assuming independence between the uncertainty over the distribution of identities (faced by the Impartial Observer) and the uncertainty over the distributions of outcomes (faced by each individual), we can introduce ambiguity on both identity acts and outcome acts, in a multi-prior setup like the Gilboa and Schmeidler (1989) maxmin model. Using different combinations of uncertainty and ambiguity at both levels, we show that there are intermediate cases between Harsanyi’s and Rawls’ Social Welfare Functions that depend on how impartiality is modeled.


The Ouvrir la science économique chair aims to enable economists to respond in original and effective ways to the major problems of our time, building on a twofold observation: today's complex, multifaceted challenges require an approach that crosses disciplinary boundaries; and research in economics must be renewed to a greater extent by advances in related disciplines.