Lecturers' Abstracts

Anna ALEXANDROVA, University of Cambridge

Philosophy of economics after the empirical turn

Contemporary economics is quite different from the economics that initially inspired the founders of philosophy of economics. It is more invested in finding interesting instrumental variables than in model building. It is less monolithic in its politics. It is driven by new data technologies. What do these changes imply for how we ought to teach and research philosophy of economics? This talk reflects on the dilemma we face between moving with the times and not losing sight of the classic issues in our field.



Jean BACCELLI, Oxford University

The social choice theory of spurious unanimity

Spurious unanimities threaten the possibility of social choice under uncertainty. This has been argued to reveal that collective evaluations require knowing more than preference data. I introduce and defend an alternative interpretation, according to which preference information can suffice, provided it is not restricted to a given pair of options considered independently of all others. This interpretation, which fits recent social choice theory, sheds light on the interplay between individual and collective rationality.



Chris CLARKE, Erasmus University Rotterdam

Does Econometrics Rest on a Mistake?

Policymakers need information about counterfactual conditionals, and econometric techniques claim to be able to evaluate these counterfactuals. But these econometric techniques presuppose (without argument) that there is a close connection between counterfactuals and randomization. Several scholars (most notably Cartwright) have called this connection into question. Indeed, some scholars (Heckman and Hoover) have offered alternative ways of evaluating counterfactuals that reject this close connection with randomization. However, the philosophical debate over this connection is at an impasse. I show how to resolve this debate in a principled and practically oriented way, namely by "conceptually engineering" the concept of a counterfactual so that it is as useful as possible for policy-making. The result is that there is indeed a close connection between counterfactuals and randomization.



Judith FAVEREAU, University of Lyon 2

The paradoxes of poverty: history and methodology of poverty measurement

Poverty has an abundance and diversity of statistical measures. Yet, despite this diversity, when poverty is measured several paradoxes emerge, leading to refinements of its measurement. Sketching the history of poverty measurement can thus be seen as tracing the history and resolution of these paradoxes. Nonetheless, one paradox appears persistent across most poverty measures: the so-called mortality paradox. The more poverty kills, the fewer poor people there are, and the lower the poverty measure is. This talk aims to establish mortality as a crucial dimension of poverty measurement. To this end, I primarily draw on Amartya Sen's work on both mortality and development ethics. First, I show how mortality data reveal crucial privations that are invisible to common poverty measures. This raises a central methodological question about measurement in general: what do numbers reveal or hide? On the one hand, for international comparison one might prefer a thin description. However, such a description might conceal crucial poverty characteristics (e.g. privations). On the other hand, one might favor revealing those crucial characteristics, which might make the measure unwieldy and thus hardly transposable. The question becomes what to decompose, and how much. Mortality data encapsulate several privations and also emphasize how people live. Consequently, they might highlight what to focus on, and how to do so, without being unduly burdensome. Second, mortality data combined with participatory approaches can be mobilized as an ethical tool. This relates to a crucial ethical issue concerning poverty measurement, namely whether it should be expressed through, for instance, income, basic needs, primary goods, or capabilities. Once more, retracing the history of these criteria brings out several paradoxes. Capabilities, being open, plural, and context-dependent, might solve some of these paradoxes.
However, capabilities have been subjected to significant criticism for being difficult to operationalize. They are supposed to be identified through public discussion, but how to implement such a discussion is unclear: it would require precisely defining both the topic and the frame of the discussion. Participatory approaches as developed by Sabina Alkire could be employed to define the frame. Mortality data, which reveal privations, could act as a proxy for capabilities and could be the topic of this public discussion.



Natalie GOLD, London School of Economics

Advances in behavioural public policy: new frameworks and old debates

Behavioural public policy as a field has its roots in the application of behavioural economics to the design of public policy. Behavioural economics studies deviations from the standard rational actor model, showing how factors that are not considered significant by standard economics can in fact influence decisions. Some of these deviations seem quite minimal. When applied to policy, this implies that small changes to the choice environment, or 'nudges', which would not be considered significant according to standard economics, can influence behaviour in a way that helps people better advance their own ends. Thaler and Sunstein argue that these changes do not prevent people from exercising their freedom of choice, and therefore can be thought of as 'Libertarian Paternalism'. However, practitioners increasingly think of behavioural public policy as applying the behavioural sciences (broadly construed) to any policy problem involving behaviour change. A contrasting narrative understands behavioural economics as the application of psychology to areas that are traditionally studied by economics. This de-emphasizes the idea of deviations from rationality, and even sees the behavioural approach as a corrective to the overly individualistic standard model. I explore what this view of behavioural public policy implies for what is distinctive about the field and how behavioural public policy can be justified.



Remco HEESEN, London School of Economics

Modelling and evaluating the credit economy in science

The replication crisis has brought renewed attention to the incentives that channel scientists' choices. At least for academic scientists, these incentives can be captured using the concept of the credit economy. Credit represents the reputational currency that scientists build up as their peers recognize the value of their contributions to knowledge. Since credit is the primary driver of academic careers, there is a strong incentive to pursue it. Here I present a rational choice model of credit for scientific productivity that I have used in previous and ongoing work to analyse decision problems scientists face. These decision problems include whether to share intermediate results, how much effort to commit to a particular project before going public, whether to take on riskier or more predictable projects, and more. The model provides insight into what rational credit-seeking scientists will do when facing such decision problems. Evaluating the results, we can draw conclusions about whether tweaks to the credit economy, or even a more wholesale overhaul of the reward structure of academia, might create a more desirable incentive structure. I will discuss the basic building blocks of the model as well as a few illustrative applications.



Magdalena MALECKA, University of Aarhus

Values in economic research: perspectives from philosophy of science

The methodological and philosophical discussions of the role of values, or value judgments, in economics have a long history. Economic methodologists and philosophers of economics would benefit from paying more attention to how philosophers of science analyse values in science. I will show that insights from the philosophy-of-science literature on values in science change the analytical angle and approach to the topic, and I will elaborate on why they may advance the ongoing debates about the influence that so-called epistemic and non-epistemic values have on economic research. Finally, my goal is also to reflect on the philosophical stakes of analysing the value-ladenness of economics.



Mary S. MORGAN, London School of Economics

What Travels In, or With a Model?

How should we think of models that travel between disciplines? Do they embed lots of ideas and concepts from their originating science, or are they rather thin and simple objects ('templates') that appear context-free? It is easy to suppose that thick models travel with 'baggage' which might be incompatible with their new home, while thin models travel 'lite', easily adapted for different purposes in new homes. But this judgement is too neat. Thinness may disguise the fact that the new home is not a compatible context, may conceal rather than reveal the model's potential to address the questions posed in its new home, and may leave the specific model format or language inappropriate for the new problem. Fatness may provide positive as well as negative analogical features, which may be problematic or may be a source of creativity. The problems of transferring both fat and thin models may be overcome by being explicit about the issues of adapting a model to its new home: does the scientist rethink their own problem and its context to fit the incoming model, or adapt the model to fit into its new home? This is the basic design question for modellers importing a model from another field.



Raffaello SERI, Insubria University

Equifinality and model selection in econometrics

As any empirical economist knows, the same data are often compatible with several statistical models, thus providing an example of equifinality. This brings to the fore the problem of selecting among competing models. Selection can be achieved in several ways, e.g. through statistical tests. The main problem with this line of inquiry arises when testing yields inconclusive and often conflicting results about the best model in a group. Several selection strategies have been advanced (in particular the general-to-specific approach), but they are not always easy to implement. The solution in this case is often to use information criteria, i.e. measures of goodness-of-fit penalized by a measure of the (absence of) parsimony of the models involved.
We provide a new treatment of model selection through information criteria. First, we show that this method can be generalized replacing parsimony with other measures of conformity with an idealized situation, e.g. the presence of a certain causality structure, etc. Second, we embed model selection in the theoretical framework offered by decision theory. In particular, we introduce a relation of lexicographic preference among competing models in the limit of an infinite number of observations: selection is performed, first, according to goodness-of-fit and, then, in the case of a tie, through the second measure of conformity. Third, we study under which conditions selection through an information criterion is compatible with this relation in finite samples. Fourth, we show that pairwise comparison of models and penalization of goodness-of-fit measures arise naturally from preferences defined on the collection of statistical models under scrutiny.
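To fix ideas, the objects described above can be sketched schematically as follows (a generic rendering of standard penalized criteria and of a lexicographic relation, not the authors' own formulation; the symbols $\hat{\ell}_n$, $c_n$, $\mathrm{pen}$, and $\bar{\ell}$ are illustrative notation, not taken from the abstract):

```
% Generic information criterion for model M on n observations:
% maximized log-likelihood \hat{\ell}_n(M) penalized by a measure of
% (absence of) parsimony, or any other measure of conformity pen(M).
% Classical instances: AIC (c_n = 2, pen = dim M), BIC (c_n = log n, pen = dim M).
\mathrm{IC}_n(M) \;=\; -2\,\hat{\ell}_n(M) \;+\; c_n\,\mathrm{pen}(M)

% Lexicographic preference in the limit of infinitely many observations,
% with \bar{\ell}(M) the limiting goodness-of-fit of M:
% M is preferred when it fits strictly better, or fits equally well and
% does better on the second measure of conformity.
M \succ M'
\;\iff\;
\bar{\ell}(M) > \bar{\ell}(M')
\;\ \text{or}\;\
\bigl[\, \bar{\ell}(M) = \bar{\ell}(M')
\ \text{and}\ \mathrm{pen}(M) < \mathrm{pen}(M') \,\bigr]
```

On this reading, minimizing $\mathrm{IC}_n$ in a finite sample can be seen as an approximation of the lexicographic relation: goodness-of-fit decides first, and the penalty breaks ties asymptotically.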