### Seminars

# Paris Econometrics Seminar

This bimonthly seminar is co-organized by CREST, the Department of Economics at Sciences Po, and the Paris School of Economics. It features research on theoretical and applied econometrics.

The seminar takes place on Mondays from 16:15 to 17:30, unless otherwise noted. It is held either in person, at one of the three institutions, or virtually via Zoom. To receive the seminar announcements (which include the Zoom link when applicable), please click on the “Subscribe to this mailing list” button below.

- Scientific contacts: Clément de Chaisemartin (Sciences Po), Elia Lapenta (CREST), and Philipp Ketz (PSE)
- Logistics contact: Sarah Dafer (sarah.dafer at psemail.eu)

This seminar is co-funded by a French government subsidy managed by the Agence Nationale de la Recherche under the framework of the Investissements d’avenir programme (reference ANR-17-EURE-0001).

## Upcoming events

**Monday 17 June 2024 16:15-17:30** - Sciences Po, room H405
**RAMBACHAN Ashesh** (MIT): __From Predictive Algorithms to Automatic Generation of Anomalies__ **Co-author: Sendhil Mullainathan**
- Abstract: Machine learning algorithms can find predictive signals that researchers fail to notice; yet they are notoriously hard to interpret. How can we extract theoretical insights from these black boxes? History provides a clue. Facing a similar problem -- how to extract theoretical insights from their intuitions -- researchers often turned to "anomalies": constructed examples that highlight flaws in an existing theory and spur the development of new ones. Canonical examples include the Allais paradox and the Kahneman-Tversky choice experiments for expected utility theory. We suggest anomalies can extract theoretical insights from black box predictive algorithms. We develop procedures to automatically generate anomalies for an existing theory when given a predictive algorithm. We cast anomaly generation as an adversarial game between a theory and a falsifier, the solutions to which are anomalies: instances where the black box algorithm predicts - were we to collect data - we would likely observe violations of the theory. As an illustration, we generate anomalies for expected utility theory using a large, publicly available dataset on real lottery choices. Based on an estimated neural network that predicts lottery choices, our procedures recover known anomalies and discover new ones for expected utility theory. In incentivized experiments, subjects violate expected utility theory on these algorithmically generated anomalies; moreover, the violation rates are similar to observed rates for the Allais paradox and common ratio effect.
- Full text [pdf]

## Archives

**Monday 3 June 2024 16:15-17:30** - PSE, room R1-14
**HE Junnan** (Sciences Po): __Diversified Production and Market Power: Theory and Evidence from Renewables__ **Co-authors: Michele Fioretti and Jorge Tamayo**
- Abstract: We show that market power can either exacerbate or mitigate fluctuations in energy prices due to renewable energy availability, and provide empirical evidence from the Colombian energy sector, where hydropower generation is prevalent and energy suppliers have diversified technology portfolios. Incentives to crowd out rivals make a supplier produce more during a drought if it has access to other technologies with capacities unaffected by the drought. During abundance, instead, access to other technologies reduces a firm’s supply compared to a scenario where these technologies are owned by its rivals, as rivals cannot crowd out the firm experiencing abundance as easily. Jointly, these two effects create a U-shaped relationship between market concentration and prices when firms have diversified production technologies, which applies more broadly to other industries. Initially, transferring high-cost capacity to a large firm with the most efficient technology lowers prices, but it eventually leads to unilateral price increases.
- Full text [pdf]

**Tuesday 28 May 2024 14:45-16:00** - Sciences Po, room H405
**MOLINARI Francesca** (Cornell University): __Inference for an Algorithmic Fairness-Accuracy Frontier__ **Co-author: Yiqi Liu**
- Abstract: Decision-making processes increasingly rely on the use of algorithms. Yet, algorithms' predictive ability frequently exhibits systematic variation across subgroups of the population. While both fairness and accuracy are desirable properties of an algorithm, they often come at the cost of one another. What should a fairness-minded policymaker do then, when confronted with finite data? In this paper, we provide a consistent estimator for a theoretical fairness-accuracy frontier put forward by Liang, Lu and Mu (2023) and propose inference methods to test hypotheses that have received much attention in the fairness literature, such as (i) whether fully excluding a covariate from use in training the algorithm is optimal and (ii) whether there are less discriminatory alternatives to an existing algorithm. We also provide an estimator for the distance between a given algorithm and the fairest point on the frontier, and characterize its asymptotic distribution. We leverage the fact that the fairness-accuracy frontier is part of the boundary of a convex set that can be fully represented by its support function. We show that the estimated support function converges to a tight Gaussian process as the sample size increases, and then express policy-relevant hypotheses as restrictions on the support function to construct valid test statistics.
- Full text [pdf]

**Monday 13 May 2024 16:15-17:30** - CREST, room 3001
**SOKULLU Senay** (University of Bristol): __Identification and Estimation of Demand Models with Endogenous Product Entry and Exit__ **Co-authors: Victor Aguirregabiria and Alessandro Iaria**
- Abstract: This paper deals with the endogeneity of firms’ entry and exit decisions in demand estimation. Product entry decisions lack a single crossing property in terms of demand unobservables, which causes the inconsistency of conventional methods dealing with selection. We present a novel and straightforward two-step approach to estimate demand while addressing endogenous product entry. In the first step, our method estimates a finite mixture model of product entry accommodating latent market types. In the second step, it estimates demand controlling for the propensity scores of all latent market types. We apply this approach to data from the airline industry.
- Full text [pdf]

**Monday 29 April 2024 16:15-17:30** - PSE, room R1-14
**GU Jiaying** (University of Toronto): __Counterfactual Identification and Latent Space Enumeration in Discrete Outcome Models__ **Co-authors: Thomas Russell and Thomas Stringham**
- Abstract: This paper provides a unified framework for partial identification of counterfactual parameters in a general class of discrete outcome models allowing for endogenous regressors and multidimensional latent variables, all without parametric distributional assumptions. Our main theoretical result is that, when the covariates are discrete, the infinite-dimensional latent variable distribution can be replaced with a finite-dimensional version that is equivalent from an identification perspective. The finite-dimensional latent variable distribution is constructed in practice by enumerating regions of the latent variable space with a new and efficient cell enumeration algorithm for hyperplane arrangements. We then show that bounds on a certain class of counterfactual parameters can be computed by solving a sequence of linear programming problems, and show how the researcher can introduce additional assumptions as constraints in the linear programs. Finally, we apply the method to a mobile phone choice example with heterogeneous choice sets, as well as an airline entry game example.
- Full text [pdf]

**Monday 18 March 2024 16:15-17:30** - CREST
**WINDMEIJER Frank** (University of Oxford): __The Falsification Adaptive Set in Linear Models with Instrumental Variables that Violate the Exclusion or Conditional Exogeneity Restriction__
- Abstract: Masten and Poirier (2021) introduced the falsification adaptive set (FAS) in linear models with a single endogenous variable estimated with multiple instrumental variables (IVs). The FAS reflects the model uncertainty that arises from falsification of the baseline model. We show that it applies to cases where a conditional exogeneity assumption holds and invalid instruments violate the exclusion assumption only. We propose a generalized FAS that reflects the model uncertainty when some instruments violate the exclusion assumption and/or some instruments violate the conditional exogeneity assumption. If there is at least one relevant instrument that satisfies both the exclusion and conditional exogeneity assumptions, then this generalized FAS is guaranteed to contain the population parameter of interest in large samples.

**Monday 11 March 2024 16:15-17:30** - Sciences Po, room H405
**MENZEL Konrad** (NYU): __Transfer Estimates for Causal Effects across Heterogeneous Sites__
- Abstract: We consider the problem of extrapolating treatment effects across heterogeneous populations (sites/contexts). We consider an idealized scenario in which the researcher observes cross-sectional data for a large number of units across several experimental sites in which an intervention has already been implemented, and wishes to transfer estimates to a new target site for which a baseline survey of unit-specific, pre-treatment outcomes and relevant attributes is available. We propose a transfer estimator that exploits cross-sectional variation between individuals and sites to predict treatment outcomes using baseline outcome data for the target location. We consider the problem of determining the optimal finite-dimensional feature space in which to solve that prediction problem. Our approach is design-based in the sense that the performance of the predictor is evaluated given the specific, finite selection of experimental and target sites. Our approach is nonparametric, and our formal results concern the construction of an optimal basis of predictors as well as convergence rates for the estimated conditional average treatment effect relative to the constrained-optimal population predictor for the target site. We illustrate our approach using a combined data set of five multi-site randomized controlled trials (RCTs) to evaluate the effect of conditional cash transfers on school attendance.
- Full text [pdf]

**Monday 26 February 2024 16:15-17:30** - PSE, room R1-15
**DHAENE Geert** (KU Leuven): __Iterated corrections for incidental parameter bias__ **Co-authors: Koen Jochmans and Martin Weidner**
- Abstract: In many panel data models, fixed effects typically lead to an incidental parameter problem: maximum likelihood estimation of the model’s common parameters (e.g., common slope or variance parameters) is often biased/inconsistent as the number of cross-sectional units grows large while the number of time periods is fixed (Neyman and Scott 1948). I will discuss two methods to correct for incidental parameter bias. Their power lies in the fact that they can be iterated. (1) The first method, called approximate functional differencing, has a Bayesian flavor. It uses the posterior predictive density to construct a q-th order bias corrected score function, starting from an initially chosen (biased) score function. In the limit as q goes to infinity, the method is equivalent to Bonhomme’s (2012) method of functional differencing in point-identified models. When point identification fails, the limit remains well defined and yields estimates with very small bias. (2) The second method is entirely frequentist. Starting from the (biased) profile score function, it constructs a bias corrected score function by calculating the bias as a function of the incidental parameters and using maximum likelihood estimates thereof as plug-in estimates, and it iterates these steps. In several models, it is found that the first-order bias corrected profile score function is already exactly unbiased, hence resolving the incidental parameter problem. In other models, the infinitely iterated bias correction leads to estimates with very small bias. (This presentation is based on joint work with Martin Weidner and older work with Koen Jochmans.)
- Full text [pdf]

**Monday 11 December 2023 16:15-17:30** - PSE, room R1-14
**POULIOT Guillaume** (University of Chicago): __An Exact t-Test__
- Abstract: Multivariate linear regression and randomization-based inference are two essential methods in statistics and econometrics. Nevertheless, the problem of producing a randomized test for the value of a single regression coefficient that is exactly valid when errors are exchangeable, and which is asymptotically valid for the best linear predictor, has remained elusive. In this paper, we produce a test that is exactly valid with exchangeable errors and which allows for general covariate designs; covariates may be continuous as well as discrete, and may be correlated. The test is asymptotically valid when the errors are not exchangeable, in particular in the presence of conditional heteroskedasticity.
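
The general flavor of randomization-based inference on a regression coefficient can be illustrated with a generic residual-permutation test (a Freedman-Lane style construction; this is not the paper's specific test, which is exactly valid under exchangeable errors, and all names below are illustrative):

```python
import numpy as np

def tstat(y, X, j):
    # OLS t-statistic for coefficient j (classical homoskedastic standard error)
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ X.T @ y
    resid = y - X @ beta
    s2 = resid @ resid / (X.shape[0] - X.shape[1])
    return beta[j] / np.sqrt(s2 * XtX_inv[j, j])

def perm_test(y, x, Z, n_perm=499, seed=0):
    # Permutation p-value for the coefficient on x, controlling for Z
    # (Z should contain the intercept). Residuals from the restricted
    # model y ~ Z are permuted, which is motivated by exchangeability.
    rng = np.random.default_rng(seed)
    X = np.column_stack([x, Z])
    t_obs = tstat(y, X, 0)
    fitted = Z @ np.linalg.lstsq(Z, y, rcond=None)[0]
    e = y - fitted
    hits = sum(
        abs(tstat(fitted + rng.permutation(e), X, 0)) >= abs(t_obs)
        for _ in range(n_perm)
    )
    return (hits + 1) / (n_perm + 1)

# toy data: x has no true effect, and covariates are correlated
rng = np.random.default_rng(1)
n = 100
Z = np.column_stack([np.ones(n), rng.normal(size=n)])
x = rng.normal(size=n) + 0.5 * Z[:, 1]
y = Z @ np.array([1.0, 2.0]) + rng.normal(size=n)
p = perm_test(y, x, Z)
print(round(p, 3))
```

Under the null, such p-values are approximately uniform; the paper's contribution is a construction whose size is exact under exchangeability while remaining asymptotically valid beyond it.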

**Monday 4 December 2023 16:00-17:15** - Zoom
**FORNERON Jean-Jacques** (Boston University): __Occasionally Misspecified__
- Abstract: When fitting a particular economic model on a sample of data, the model may turn out to be heavily misspecified for some observations. This can happen because of unmodelled idiosyncratic events, such as an abrupt but short-lived change in policy. These outliers can significantly alter estimates and inferences. Robust estimation is desirable to limit their influence. For skewed data, however, robust estimation induces another bias which can also invalidate the estimation and inferences. This paper proposes a robust GMM estimator with a simple bias correction that does not degrade robustness significantly. The paper provides finite-sample robustness bounds, and asymptotic uniform equivalence with an oracle that discards all outliers. Consistency and asymptotic normality ensue from that result. An application to the “Price-Puzzle,” which finds that inflation increases when monetary policy tightens, illustrates the concerns and the method. The proposed estimator finds the intuitive result: tighter monetary policy leads to a decline in inflation.
- Full text [pdf]

**Monday 13 November 2023 16:15-17:30** - Zoom
**ZHU Yinchu** (Brandeis University): __New possibilities in identification of binary choice models with fixed effects__
- Abstract: We study the identification of binary choice models with fixed effects. We provide a condition called sign saturation and show that this condition is sufficient for the identification of the model. In particular, we can guarantee identification even with bounded regressors. We also show that without this condition, the model is not identified unless the error distribution belongs to a small class. The same sign saturation condition is also essential for identifying the sign of treatment effects. A test is provided to check the sign saturation condition and can be implemented using existing algorithms for the maximum score estimator.
- Full text [pdf]

**Monday 16 October 2023 18:00-19:15** - Zoom
**CHETVERIKOV Denis** (UCLA): __Spectral and post-spectral estimators for grouped panel data models__ **Co-author: Elena Manresa**
- Abstract: In this paper, we develop spectral and post-spectral estimators for grouped panel data models. Both estimators are consistent in the asymptotics where the number of observations N and the number of time periods T simultaneously grow large. In addition, the post-spectral estimator is sqrt(NT)-consistent and asymptotically normal with mean zero under the assumption of well-separated groups even if T is growing much slower than N. The post-spectral estimator has, therefore, theoretical properties that are comparable to those of the grouped fixed-effect estimator developed by Bonhomme and Manresa (2015). In contrast to the grouped fixed-effect estimator, however, our post-spectral estimator is computationally straightforward.
- Full text [pdf]
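
As a rough illustration of the spectral idea (not the paper's estimator, which also involves a post-spectral refinement), one can cluster the units of a panel on the leading left singular vectors of the outcome matrix. The sketch below uses hypothetical simulated data with two well-separated groups:

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    # minimal Lloyd's algorithm
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        dist = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dist.argmin(axis=1)
        for g in range(k):
            if (labels == g).any():
                centers[g] = X[labels == g].mean(axis=0)
    return labels

def spectral_groups(Y, G):
    # cluster the N units of an N x T panel on the top-G left singular vectors
    U, _, _ = np.linalg.svd(Y, full_matrices=False)
    return kmeans(U[:, :G], G)

# simulated panel: two groups of units following different time profiles
rng = np.random.default_rng(2)
N, T = 60, 40
true_g = np.repeat([0, 1], N // 2)
profiles = np.vstack([np.sin(np.linspace(0, 3, T)), np.linspace(1, -1, T)])
Y = profiles[true_g] + 0.3 * rng.normal(size=(N, T))
labels = spectral_groups(Y, 2)
# agreement with the true grouping, up to relabeling of the clusters
agree = max(np.mean(labels == true_g), np.mean(labels != true_g))
print(agree)
```

With well-separated groups, the group structure is visible in the low-rank part of Y, which is why clustering in the singular-vector space recovers it.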

**Monday 9 October 2023 16:15-17:30** - CREST, room 3001
**KWON Soonwoo** (Brown University): __Testing Mechanisms__ **Co-author: Jonathan Roth**
- Abstract: Economists are often interested in the mechanisms by which a particular treatment affects an outcome. This paper develops tests for the "sharp null of full mediation" that the treatment D operates on the outcome Y only through a particular conjectured mechanism (or sets of mechanisms) M. A key observation is that if D is randomly assigned and has a monotone effect on M, then D is a valid instrumental variable for the LATE of M on Y. Existing tools for testing the validity of the LATE assumptions can thus be used to test the sharp null of full mediation when M and D are binary. We extend these results to settings where M is multi-valued or multi-dimensional. We further provide methods for lower-bounding the size of the alternative mechanisms when the sharp null is rejected. An advantage of our approach relative to existing tools for mediation analysis is that it does not require stringent assumptions about how M is assigned.

**Monday 18 September 2023 16:15-17:30** - Sciences Po, room H405
**TAMER Elie** (Harvard University): __Parallel Trends and Dynamic Choices__ **Co-authors: Philip Marx and Xun Tang**
- Abstract: Difference-in-differences is a common method for estimating treatment effects, and the parallel trends condition is its main identifying assumption: the trend in mean untreated outcomes is independent of the observed treatment status. In observational settings, treatment is often a dynamic choice made or influenced by rational actors, such as policy-makers, firms, or individual agents. This paper relates parallel trends to economic models of dynamic choice. We clarify the implications of parallel trends on agent behavior and study when dynamic selection motives lead to violations of parallel trends. Finally, we consider identification under alternative assumptions that accommodate features of dynamic choice.
- Full text [pdf]

**Monday 11 September 2023 16:15-17:30** - PSE, Campus Jourdan, room R1-14
**TETENOV Aleksey** (University of Geneva): __Constrained Classification and Policy Learning__ **Co-authors: Toru Kitagawa and Shosei Sakaguchi**
- Abstract: Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate loss techniques to circumvent the computational complexity of minimizing empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as a weighted (cost-sensitive) classification problem. Consistency of the surrogate loss approaches studied in Zhang (2004) and Bartlett et al. (2006) relies on the assumption of correct specification, which means that the specified set of classifiers is rich enough to contain a first-best classifier. This assumption is, however, less credible when the set of classifiers is constrained by interpretability or fairness, leaving the applicability of surrogate loss-based algorithms unknown in such second-best scenarios. This paper studies consistency of surrogate loss procedures under a constrained set of classifiers without assuming correct specification. We show that in settings where the constraint restricts the classifier’s prediction set only, hinge losses (i.e., l1-support vector machines) are the only surrogate losses that preserve consistency in second-best scenarios. If the constraint additionally restricts the functional form of the classifier, consistency of a surrogate loss approach is not guaranteed, even with hinge loss. We therefore characterize conditions on the constrained set of classifiers that can guarantee consistency of hinge risk minimizing classifiers. Exploiting our theoretical results, we develop robust and computationally attractive hinge loss-based procedures for a monotone classification problem.
- Full text [pdf]

**Monday 19 June 2023 09:30-12:45** - Mini Workshop
- 9:30-10:15: Stéphane Bonhomme (University of Chicago): Estimating Individual Responses when Tomorrow Matters
- 10:15-11:00: Koen Jochmans (Toulouse School of Economics): Bootstrap inference for fixed-effect models
- 11:00-11:15: Break
- 11:15-12:00: Anna Simoni (CREST, ENSAE, Ecole Polytechnique): Bayesian Bi-level Sparse Group Regressions for Macroeconomic Forecasting
- 12:00-12:45: Frank Windmeijer (University of Oxford): The Robust F-Statistic as a Test for Weak Instruments

**Monday 12 June 2023 16:00-17:15** - Zoom seminar
**MASTEN Matt** (Duke University): __Assessing Omitted Variable Bias when the Controls are Endogenous__ **Co-authors: Paul Diegert and Alexandre Poirier**
- Abstract: Omitted variables are one of the most important threats to the identification of causal effects. Several widely used approaches, including Oster (2019), assess the impact of omitted variables on empirical conclusions by comparing measures of selection on observables with measures of selection on unobservables. These approaches either (1) assume the omitted variables are uncorrelated with the included controls, an assumption that is often considered strong and implausible, or (2) use a method called residualization to avoid this assumption. In our first contribution, we develop a framework for objectively comparing sensitivity parameters. We use this framework to formally prove that the residualization method generally leads to incorrect conclusions about robustness. In our second contribution, we then provide a new approach to sensitivity analysis that avoids this critique, allows the omitted variables to be correlated with the included controls, and lets researchers calibrate sensitivity parameters by comparing the magnitude of selection on observables with the magnitude of selection on unobservables as in previous methods. We illustrate our results in an empirical study of the effect of historical American frontier life on modern cultural beliefs. Finally, we implement these methods in the companion Stata module regsensitivity for easy use in practice.
- Full text [pdf]

**Monday 22 May 2023 16:00-17:15** - PSE, 48 boulevard Jourdan, room R1-15
**WÜTHRICH Kaspar** (UC San Diego): __(When) should you adjust inferences for multiple hypothesis testing?__ **Co-authors: Davide Viviano and Paul Niehaus**
- Abstract: Multiple hypothesis testing practices vary widely, without consensus on which are appropriate when. We provide an economic foundation for these practices. In studies of multiple interventions or sub-populations, adjustments may be appropriate depending on scale economies in the research production function, with control of classical notions of compound errors emerging in some but not all cases. Studies with multiple outcomes motivate testing using a single index, or adjusted tests of several indices when the intended audience is heterogeneous. Data on actual research costs in two applications suggest both that some adjustment is warranted and that standard procedures are overly conservative.
- Full text [pdf]

**Monday 15 May 2023 16:00-17:15** - CREST, room 3001
**NOACK Claudia** (Oxford): __Flexible Covariate Adjustments in Regression Discontinuity Designs__
- Abstract: Empirical regression discontinuity (RD) studies often use covariates to increase the precision of their estimates. In this paper, we propose a novel class of estimators that use such covariate information more efficiently than the linear adjustment estimators that are currently used widely in practice. Our approach can accommodate a possibly large number of either discrete or continuous covariates. It involves running a standard RD analysis with an appropriately modified outcome variable, which takes the form of the difference between the original outcome and a function of the covariates. We characterize the function that leads to the estimator with the smallest asymptotic variance, and show how it can be estimated via modern machine learning, nonparametric regression, or classical parametric methods. The resulting estimator is easy to implement, as tuning parameters can be chosen as in a conventional RD analysis. An extensive simulation study illustrates the performance of our approach.
- Full text [pdf]
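
The outcome-modification idea can be sketched as follows: subtract a fitted function of the covariates from the outcome and run a standard RD analysis on the result. The toy example below uses a simple global linear adjustment (the paper's point is that flexible, e.g. machine-learning, choices of this function can reduce variance further); all quantities are simulated and illustrative:

```python
import numpy as np

def rd_estimate(y, r, c=0.0, h=0.5):
    # sharp-RD estimate: local linear fit on each side of the cutoff,
    # rectangular kernel with bandwidth h
    sides = {}
    for name, keep in [("left", (r < c) & (r > c - h)),
                       ("right", (r >= c) & (r < c + h))]:
        X = np.column_stack([np.ones(keep.sum()), r[keep] - c])
        sides[name] = np.linalg.lstsq(X, y[keep], rcond=None)[0][0]
    return sides["right"] - sides["left"]

rng = np.random.default_rng(3)
n = 2000
r = rng.uniform(-1, 1, n)              # running variable, cutoff at 0
x = rng.normal(size=n)                 # predetermined covariate
y = 1.0 * (r >= 0) + 0.5 * r + 2.0 * x + rng.normal(size=n)  # true effect = 1

# modified outcome: subtract a fitted (here, global linear) function of x
gamma = np.linalg.lstsq(np.column_stack([np.ones(n), x]), y, rcond=None)[0][1]
m = y - gamma * (x - x.mean())

tau_raw, tau_adj = rd_estimate(y, r), rd_estimate(m, r)
print(round(tau_raw, 2), round(tau_adj, 2))
```

Both estimators target the same jump at the cutoff, but the adjusted outcome has much less residual variance, so the adjusted estimate is typically more precise.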

**Monday 17 April 2023 17:30-18:45**
**URA Takuya** (UC Davis): __Slow Movers in Panel Data__ **Co-author: Yuya Sasaki**
- Abstract: Panel data often contain stayers (units with no within-variations) and slow movers (units with little within-variations). In the presence of many slow movers, conventional econometric methods can fail to work. We propose a novel method of inference for the average partial effects in correlated random coefficient models that is robust across various distributions of within-variations, including the cases with many stayers and/or many slow movers, in a unified manner. In addition to this robustness property, our proposed method entails smaller biases and hence improves accuracy in inference compared to existing alternatives. Simulation studies demonstrate our theoretical claims about these properties: the conventional 95% confidence interval covers the true parameter value with 37-93% frequencies, whereas our proposed one achieves 93-96% coverage frequencies.
- Full text [pdf]

**Monday 27 March 2023 16:00-17:15** - PSE, Campus Jourdan, room R1-15
**DE CHAISEMARTIN Clément** (Sciences Po): __More Robust Estimators for Panel Bartik Designs, With An Application to the Effect of Chinese Imports on US Employment__ **Co-author: Ziteng Lei**
- Abstract: We show that panel Bartik regressions identify non-convex combinations of location-and-period-specific treatment effects. Thus, those regressions could be biased in the presence of heterogeneous effects. We propose two alternative correlated-random-coefficient (CRC) estimators, that are more robust to heterogeneous effects. We revisit Autor et al. (2013), who use a panel Bartik regression to estimate the effect of imports from China on US employment. Their regression estimates a highly non-convex combination of effects, and our CRC estimates are small and insignificant: without assuming constant effects, one cannot conclude that imports from China had a significantly negative effect on US employment.
- Full text [pdf]

**Monday 13 March 2023**
**CHETVERIKOV Denis** (UCLA): __Seminar postponed__

**Monday 13 February 2023 16:00-17:00**
**BEYHUM Jad** (ENSAI): __Nowcasting GDP with factor-augmented high-dimensional MIDAS regression__ **Co-author: J. Striaukas**
- Abstract: This paper introduces a factor-augmented high-dimensional mixed-frequency regression model to nowcast GDP growth. This new approach goes beyond classical sparse regression and factor models by combining them. We derive rates of convergence of our estimator. The new technique is applied to nowcast GDP. We find that it significantly improves over a range of more classical nowcasting methods based on either sparse regression or factor models.

**Monday 5 December 2022 16:00-17:15** - Zoom
**KETZ Philipp** (PSE - CNRS): __Allowing for weak identification when testing GARCH-X type models__
- Abstract: In this paper, we use the results in Andrews and Cheng (2012), extended to allow for parameters to be near or at the boundary of the parameter space, to derive the asymptotic distributions of the two test statistics that are used in the two-step (testing) procedure proposed by Pedersen and Rahbek (2019). The latter aims at testing the null hypothesis that a GARCH-X type model, with exogenous covariates (X), reduces to a standard GARCH type model, while allowing the "GARCH parameter" to be unidentified. We then provide a characterization result for the asymptotic size of any test for testing this null hypothesis before numerically establishing a lower bound on the asymptotic size of the two-step procedure at the 5% nominal level. This lower bound exceeds the nominal level, revealing that the two-step procedure does not control asymptotic size. In a simulation study, we show that this finding is relevant in practice, in that the two-step procedure can suffer from overrejection in finite samples. We also propose a new test that, by construction, controls asymptotic size and is found to be more powerful than the two-step procedure when the "ARCH parameter" is "very small" (in which case the two-step procedure underrejects).
- Full text [pdf]

**Monday 21 November 2022 16:00-17:15** - PSE, room R1-13
**MUGNIER Martin** (CREST, ENSAE, Institut Polytechnique de Paris): __Unobserved Clusters of Time-Varying Heterogeneity in Nonlinear Panel Data Models__
- Abstract: In studies based on longitudinal data, researchers often assume time-invariant unobserved heterogeneity or linear-in-parameters conditional expectations. Violation of these assumptions may lead to poor counterfactuals. I study the identification and estimation of a large class of nonlinear grouped fixed effects (NGFE) models where the relationship between observed covariates and cross-sectional unobserved heterogeneity is left unrestricted but the latter only takes a restricted number of paths over time. I show that the corresponding clusters and the nonparametrically specified link function can be point-identified when both dimensions of the panel are large. I propose a semiparametric NGFE estimator whose implementation is feasible, and establish its large sample properties in popular binary and count outcome models. Distinctive features of the NGFE estimator are that it is asymptotically normal and unbiased at parametric rates, and that it allows for the number of periods to grow slowly with the number of cross-sectional units. Monte Carlo simulations suggest good finite sample performance. I apply this new method to revisit the so-called inverted-U relationship between product market competition and innovation. Allowing for clustered patterns of time-varying unobserved heterogeneity leads to a much flatter estimated curve.
- Full text [pdf]

**Monday 7 November 2022 16:00-17:15** - PSE, room R2-20
**SPINI Pietro** (Bristol): __Robustness, Heterogeneous Treatment Effects and Covariate Shifts__
- Abstract: This paper studies the robustness of estimated policy effects to changes in the distribution of covariates. Robustness to covariate shifts is important, for example, when evaluating the external validity of (quasi)-experimental results, which are often used as a benchmark for evidence-based policy-making. I propose a novel scalar robustness metric. This metric measures the magnitude of the smallest covariate shift needed to invalidate a claim on the policy effect (for example, ATE >= 0) supported by the (quasi)-experimental evidence. My metric links the heterogeneity of policy effects and robustness in a flexible, nonparametric way and does not require functional form assumptions. I cast the estimation of the robustness metric as a de-biased GMM problem. This approach guarantees a parametric convergence rate for the robustness metric while allowing for machine learning-based estimators of policy effect heterogeneity (for example, lasso, random forest, boosting, neural nets). I apply my procedure to the Oregon Health Insurance experiment. I study the robustness of policy effects estimates of health-care utilization and financial strain outcomes, relative to a shift in the distribution of context-specific covariates. Such covariates are likely to differ across US states, making quantification of robustness an important exercise for adoption of the insurance policy in states other than Oregon. I find that the effect on outpatient visits is the most robust among the metrics of health-care utilization considered.
- Full text [pdf]

**Monday 10 October 2022 16:00-17:15** - PSE, room R1-13
**SUN Liyang** (CEMFI): __Empirical Welfare Maximization with Constraints__
- Abstract: When designing eligibility criteria for welfare programs, policymakers naturally want to target the individuals who will benefit the most. This paper proposes two new econometric approaches to selecting an optimal eligibility criterion when individuals’ costs to the program are unknown and need to be estimated. One is designed to achieve the highest benefit possible while satisfying a budget constraint with high probability. The other is designed to optimally trade off the benefit and the cost from violating the budget constraint. The setting I consider extends the previous literature on Empirical Welfare Maximization by allowing for uncertainty in estimating the budget needed to implement the criterion, in addition to its benefit. Consequently, my approaches improve on the existing approach as they can be applied to settings with imperfect take-up or varying program needs. I illustrate my approaches empirically by deriving an optimal budget-constrained Medicaid expansion in the US.
- Full text [pdf]
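
The budget-constrained welfare-maximization idea above can be illustrated with a toy sketch. This is not the paper's estimator; the data-generating process, the class of threshold rules, and the budget level are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.uniform(size=n)                        # a scalar eligibility characteristic
benefit = 1.0 - x + rng.normal(0.0, 0.1, n)    # estimated individual benefits (toy model)
cost = np.full(n, 1.0)                         # estimated per-person program cost
budget = 300.0                                 # total program budget

# Search over threshold rules "treat everyone with x <= t", keeping only
# rules whose estimated cost satisfies the budget constraint.
best_t, best_welfare = None, -np.inf
for t in np.linspace(0.0, 1.0, 101):
    treated = x <= t
    if cost[treated].sum() <= budget:
        welfare = benefit[treated].sum()
        if welfare > best_welfare:
            best_t, best_welfare = t, welfare
```

The paper's contribution lies in what this sketch ignores: both `benefit` and `cost` are estimated with error, so the budget constraint can only be satisfied with high probability or traded off against benefit.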

**Monday 26 September 2022 16:00-17:15**- CREST, room 3001
**HELM Ines**(LMU Munich) :__Inference for Ranks__**Co-authors: Sergei Bazylik, Magne Mogstad, Joseph P. Romano, and Azeem M. Shaikh**- AbstractThis talk is based on two papers: https://www.ucl.ac.uk/~uctpdwi/papers/cwp0422.pdf and https://www.ucl.ac.uk/~uctpdwi/papers/cwp4021.pdf.

**Monday 12 September 2022 16:00-17:15****SEMENOVA Vira**(Berkeley) :__Automated Inference on Sharp Bounds__- AbstractMany causal parameters involving the joint distribution of potential outcomes in treated and control states cannot be point-identified but can only be bounded from above and below. The bounds can be further tightened by conditioning on pre-treatment covariates, and the sharp version of the bounds corresponds to using a full covariate vector. This paper gives a method for estimation and inference on sharp bounds determined by a linear system of under-identified equalities (e.g., as in Heckman et al. (ReStud, 1997)). In the sharp-bounds case, the RHS of this system involves a nuisance function of (many) covariates (e.g., the conditional probability of employment in the treated or control state). Combining Neyman-orthogonality and sample splitting, I provide an asymptotically Gaussian estimator of the sharp bound that does not require solving the linear system in closed form. I demonstrate the method in an empirical application to Connecticut’s Jobs First welfare reform experiment.

**Monday 27 June 2022 16:00-17:15****YOUNG Alwyn**(LSE) :__Consistency without Inference: Instrumental Variables in Practical Application__- AbstractI use Monte Carlo simulations, the jackknife and multiple forms of the bootstrap to study a comprehensive sample of 1309 instrumental variables regressions in 30 papers published in the journals of the American Economic Association. Monte Carlo simulations based upon published regressions show that non-iid error processes in highly leveraged regressions, both prominent features of published work, adversely affect the size and power of IV tests, while increasing the bias and mean squared error of IV relative to OLS. Weak instrument pre-tests based upon F statistics are found to be largely uninformative of both size and bias. In published papers IV has little power as, despite producing substantively different estimates, it rarely rejects the OLS point estimate or the null that OLS is unbiased, while the statistical significance of excluded instruments is exaggerated.
- Full text [pdf]

**Monday 20 June 2022 16:00-17:15**- CREST, 5 Av. Le Chatelier, 91120 Palaiseau
**MAUREL Arnaud**(Duke University) :__Heterogeneity, Uncertainty and Learning: A Semiparametric Identification Analysis__**Co-authors: J. Bunting and P. Diegert**- AbstractIn this paper, we provide new semiparametric identification results for a general class of learning models in which outcomes of interest depend on i) predictable heterogeneity, ii) initially unpredictable heterogeneity that may be revealed over time, and iii) transitory uncertainty. We consider a common environment where the researcher only has access to longitudinal data on choices and outcomes. We establish point-identification of the outcome equation parameters and the distribution of the three types of unobservables, under the standard assumption that unpredictable heterogeneity and uncertainty are normally distributed. We also show that a pure learning model remains identified without making any distributional assumption. We then derive a sieve MLE estimator for the model parameters, which is shown to exhibit good finite-sample performance and is very tractable in practice.

**Monday 13 June 2022 16:00-17:15****YOUNG Alwyn**(LSE) :__This talk has been cancelled and will be rescheduled.__

**Monday 30 May 2022 16:00-17:15****STOULI Sami**(Bristol) :__Gaussian transforms modeling and the estimation of distributional regression functions__**Co-author: Richard Spady**- AbstractWe propose flexible Gaussian representations for conditional cumulative distribution functions and give a concave likelihood criterion for their estimation. Optimal representations satisfy the monotonicity property of conditional cumulative distribution functions, including in finite samples and under general misspecification. We use these representations to provide a unified framework for the flexible Maximum Likelihood estimation of conditional density, cumulative distribution, and quantile functions at parametric rate. Our formulation yields substantial simplifications and finite sample improvements over related methods. An empirical application to the gender wage gap in the United States illustrates our framework.
- Full text [pdf]

**Monday 16 May 2022 16:00-17:15**- R1-14
**MCCLOSKEY Adam**(University of Colorado, Boulder) :__Short and Simple Confidence Intervals when the Directions of Some Effects are Known__**Co-author: Philipp Ketz**- AbstractWe introduce adaptive confidence intervals on a parameter of interest in the presence of nuisance parameters, such as coefficients on control variables, with known signs. Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard ones when the nuisance parameters are small. At the same time, they entail minimal length increases at any parameter values. We apply our confidence intervals to the linear regression model, prove their uniform validity and illustrate their length properties in an empirical application to a factorial design field experiment and a Monte Carlo study calibrated to the empirical application.
- Full text [pdf]

**Thursday 14 April 2022 11:00-12:00**- PSE, Salle R2.21
**DE PAULA Aureo**(UCL) :__Identifying Network Ties from Panel Data: Theory and an Application to Tax Competition__**Co-authors: Imran Rasul and Pedro CL Souza**- AbstractSocial interactions determine many economic behaviors, but information on social ties does not exist in most publicly available and widely used datasets. We present results on the identification of social networks from observational panel data that contains no information on social ties between agents. In the context of a canonical social interactions model, we provide sufficient conditions under which the social interactions matrix, endogenous and exogenous social effect parameters are all globally identified. While this result is relevant across different estimation strategies, we then describe how high-dimensional estimation techniques can be used to estimate the interactions model based on the Adaptive Elastic Net GMM method. We employ the method to study tax competition across US states. We find the identified social interactions matrix implies tax competition differs markedly from the common assumption of competition between geographically neighboring states, providing further insights for the long-standing debate on the relative roles of factor mobility and yardstick competition in driving tax setting behavior across states. Most broadly, our identification and application show the analysis of social interactions can be extended to economic realms where no network data exists.
- Full text [pdf]

**Friday 8 April 2022 14:30-15:45**- CREST, Salle 3001
**MILLER Robert**(Carnegie Mellon University) :__Search and Matching by Race and Gender__**Co-author: Rebecca Lessem**- AbstractThis project uses data from a large firm that provided information on all job applications as well as labor market outcomes within the firm over a five-year period. Careful analysis of the data shows that African Americans and women engage in more overt job search activity within the organization than Caucasian males, attain shorter tenure on each job, and experience slower wage growth. Furthermore, we see some differences across race and gender when we look at each stage of the application process. In particular, we see that African Americans are more likely to apply for positions for which they do not meet the minimal qualifications, and both African Americans and women are more likely to withdraw from the application process. We also see that African Americans are less likely to be interviewed for a position, although we do not see any differences by race in hiring probabilities conditional on being interviewed. To explain these empirical patterns, we develop and estimate a model of two-sided search and matching, in which positions become vacant when the current occupant of the job leaves, the firm begins a search process by advertising the position, and workers employed both inside and outside the organization apply for the newly vacated position. Workers choose their intensity of job search by setting a threshold above which they would accept a job offer. The applicants are culled during a hiring process that leads both parties to become more informed about the potential job match with each applicant. The successful applicant accumulates experience on-the-job. After estimating the model, we will use counterfactuals to understand more about the differences in the search and matching process across racial and gender groups, as well as how that affects wage outcomes.
First, we know in the data that the durations that people stay in a job differ by race and gender. Our counterfactuals can analyze how large a role these durations play in the hiring process. Second, we can study how outcomes would change if the hiring committee is forced to interview more or fewer candidates. This can help to understand how institutional restrictions will affect the likelihood that an individual is offered a position.

**Monday 21 March 2022 16:00-17:15****DOVONON Prosper**(Concordia University) :__Specification Testing for Conditional Moment Restrictions under Local Identification Failure__**Co-author: Nikolay Gospodinov**- AbstractIn this paper, we study the asymptotic behavior of the specification test in conditional moment restriction models under first-order local identification failure with dependent data. More specifically, we obtain conditions under which the conventional specification test for conditional moment restrictions remains valid when first-order local identification fails but global identification is still attainable. In the process, we obtain some novel intermediate results that include extending the first- and second-order local identification framework to models defined by conditional moment restrictions, characterizing the rate of convergence of the GMM estimator and the limiting representation for degenerate U-statistics under strong mixing dependence. Simulation and empirical results illustrate the properties and the practical relevance of the proposed testing framework.
- Full text [pdf]

**Monday 7 March 2022 16:00-17:15****BEYHUM Jad**(ENSAI) :__Instrumental variable estimation of dynamic treatment effects on a survival outcome__**Co-authors: Samuele Centorrino, Jean-Pierre Florens, and Ingrid Van Keilegom**- AbstractThis paper considers identification and estimation of the causal effect of the time Z until a subject is treated on a survival outcome T. The treatment is not randomly assigned, T is randomly right censored by a random variable C, and the time to treatment Z is right censored by min(T,C). The endogeneity issue is treated using an instrumental variable explaining Z and independent of the error term of the model. We study identification in a fully nonparametric framework. We show that our specification generates an integral equation, of which the regression function of interest is a solution. We provide identification conditions that rely on this identification equation. For estimation purposes, we assume that the regression function follows a parametric model. We propose an estimation procedure and give conditions under which the estimator is asymptotically normal. The estimators exhibit good finite sample properties in simulations. Our methodology is applied to find evidence supporting the efficacy of a therapy for burn-out.
- Full text [pdf]

**Monday 14 February 2022 16:00-17:15****ANANTH Abhishek**(University of Geneva) :__Optimal Treatment Assignment Rules on Networked Populations__- AbstractI study the problem of optimally distributing treatments among individuals on a network in the presence of spillovers in the effect of treatment across linked individuals. In this paper, I consider the problem of a planner who needs to distribute a limited number of preventative treatments (e.g., vaccines) for a deadly infectious disease among individuals in a target village in order to maximize population welfare. Since the planner does not know the extent of spillovers or the heterogeneity in treatment effects, she uses data coming from an experiment conducted in a separate pilot village. By placing restrictions on how others’ treatments affect one’s outcome on the contact network, I derive theoretical limits on how the data from the experiment could be used to best allocate the treatments when the planner observes the contact network structure in both the target and pilot village. For this purpose, I extend the empirical welfare maximization (EWM) procedure to derive an optimal statistical treatment rule. Under restrictions on the shape of the contact network, I provide finite sample bounds for the uniform regret (a measure of the effectiveness of a treatment rule). The main takeaway is that the uniform regret associated with EWM, extended to account for spillovers, converges to 0 at the parametric rate as the size of the pilot experiment grows. I also show that no statistical treatment rule admits a faster rate of convergence for the uniform regret, suggesting that the EWM procedure is rate-optimal.
- Full text [pdf]

**Monday 31 January 2022 16:00-17:15****HIRSHBERG David**(Emory) :__The Basis for Inference based on Synthetic Control Methods__- AbstractSynthetic Control methods are becoming popular far beyond the context of comparative case studies in which they were first proposed. It is no longer the rule that they are used only when we have one (or few) treated units. But despite recent attention, there is little consensus on when they work and how to do inference based on them. That there is no one way to think about panel data makes this difficult. In some interpretations, we are solving what is essentially a matrix completion problem with noise that is completely unrelated to selection of treatment; in others, we are inverse propensity weighting to adjust for the selection of treatment based on past outcomes, noise and all. In this talk, I will discuss some results characterizing synthetic control estimation based on these two interpretations, drawing on the literature on synthetic control estimators for panel data as well as that on covariate balancing or calibrated inverse propensity weighting estimators for cross-sectional data. And I will highlight some issues that become apparent when we try to mix these perspectives, approaching inference based on selection of treatment from a perspective in which behaviors specific to individual units, i.e. fixed effects, interactive or otherwise, are needed to explain the heterogeneity of the data.

**Monday 13 December 2021 16:00-17:15****KOLESAR Michal**(Princeton) :__On Estimating Multiple Treatment Effects with Regression__**Co-authors: Paul Goldsmith-Pinkham and Peter Hull**- AbstractWe study the causal interpretation of regressions on multiple dependent treatments and flexible controls. Such regressions are often used to analyze randomized control trials with multiple intervention arms, and to estimate institutional quality (e.g. teacher value-added) with observational data. We show that, unlike with a single binary treatment, these regressions do not generally estimate convex averages of causal effects, even when the treatments are conditionally randomly assigned and the controls fully address omitted variables bias. We discuss different solutions to this issue, and propose as a solution a new class of efficient estimators of weighted average treatment effects.
- Full text [pdf]

**Monday 29 November 2021 16:00-17:15****ESCANCIANO Juan Carlos**(UC3M) :__Debiased Semiparametric U-Statistics: with an Application to Inequality of Opportunity__

**Monday 8 November 2021 17:30-18:45****SANTOS Andres**(UCLA) :__Inference for Large-Scale Linear Systems with Known Coefficients__**Co-authors: Z. Fang, A. Shaikh, and A. Torgovitsky**- AbstractThis paper considers the problem of testing whether there exists a non-negative solution to a possibly under-determined system of linear equations with known coefficients. This hypothesis testing problem arises naturally in a number of settings, including random coefficient, treatment effect, and discrete choice models, as well as a class of linear programming problems. As a first contribution, we obtain a novel geometric characterization of the null hypothesis in terms of identified parameters satisfying an infinite set of inequality restrictions. Using this characterization, we devise a test that requires solving only linear programs for its implementation, and thus remains computationally feasible in the high-dimensional applications that motivate our analysis. The asymptotic size of the proposed test is shown to equal at most the nominal level uniformly over a large class of distributions that permits the number of linear equations to grow with the sample size.
- Full text [pdf]

**Monday 18 October 2021 16:00-17:15****ROTH Jonathan**(Brown University) :__Efficient Estimation for Staggered Rollout Designs__**Co-author: Pedro Sant'Anna**- AbstractThis paper studies efficient estimation of causal effects when treatment is (quasi-) randomly rolled out to units at different points in time. We solve for the most efficient estimator in a class of estimators that nests two-way fixed effects models and other popular generalized difference-in-differences methods. A feasible plug-in version of the efficient estimator is asymptotically unbiased with efficiency (weakly) dominating that of existing approaches. We provide both t-based and permutation-test based methods for inference. We illustrate the performance of the plug-in efficient estimator in simulations and in an application to Wood et al. (2020a)'s study of the staggered rollout of a procedural justice training program for police officers. We find that confidence intervals based on the plug-in efficient estimator have good coverage and can be as much as five times shorter than confidence intervals based on existing state-of-the-art methods. As an empirical contribution of independent interest, our application provides the most precise estimates to date on the effectiveness of procedural justice training programs for police officers.
- Full text [pdf]

**Monday 4 October 2021 16:00-17:15****MOREIRA Humberto**(Fundação Getulio Vargas’ Brazilian School of Economics and Finance) :__Efficiency Loss of Asymptotically Efficient Tests in An Instrumental Variables Regression + Optimal Invariant Tests in an Instrumental Variables Regression With Heteroskedastic and Autocorrelated Errors__**Co-authors: Geert Ridder and Mahrad Sharifvaghefi**

**Monday 27 September 2021 16:00-17:00****HAZARD Yagan**(Paris School of Economics) :__Rescuing low-compliance RCTs__**Co-author: Simon Loewe**

**Monday 14 June 2021 16:00-17:15**- Online
**KOOPMAN Siem Jan**(Vrije Universiteit Amsterdam) :__Forecasting in a changing world: from the great recession to the COVID-19 pandemic__**Co-authors: Mariia Artemova, Francisco Blasques, and Zhaokun Zhang**- AbstractWe develop a new targeted maximum likelihood estimation method that provides improved forecasting for misspecified linear dynamic models. The method weighs data points in the observed sample and is useful in the presence of data generating processes featuring structural breaks, complex nonlinearities, or other time-varying properties which cannot be easily captured by model design. Additionally, the method reduces to classical maximum likelihood when the model is well specified, which results in weights which are set uniformly to one. We show how the optimal weights can be set by means of a cross-validation procedure. In a set of Monte Carlo experiments we reveal that the estimation method can significantly improve the forecasting accuracy of autoregressive models. In an empirical study concerned with forecasting the U.S. Industrial Production, we show that the forecast accuracy during the Great Recession can be significantly improved by giving greater weight to observations associated with past recessions. We further establish that the same empirical finding can be found for the 2008-2009 global financial crisis, for different macroeconomic time series, and for the COVID-19 recession in 2020.
- Full text [pdf]

**Monday 10 May 2021 16:00-17:15**- online
**ABADIE Alberto**(MIT) :__A Penalized Synthetic Control Estimator for Disaggregated Data__**Co-author: Jérémy L'Hour**- AbstractSynthetic control methods are commonly applied in empirical research to estimate the effects of treatments or interventions on aggregate outcomes. A synthetic control estimator compares the outcome of a treated unit to the outcome of a weighted average of untreated units that best resembles the characteristics of the treated unit before the intervention. When disaggregated data are available, constructing separate synthetic controls for each treated unit may help avoid interpolation biases. However, the problem of finding a synthetic control that best reproduces the characteristics of a treated unit may not have a unique solution. Multiplicity of solutions is a particularly daunting challenge when the data includes many treated and untreated units. To address this challenge, we propose a synthetic control estimator that penalizes the pairwise discrepancies between the characteristics of the treated units and the characteristics of the units that contribute to their synthetic controls. The penalization parameter trades off pairwise matching discrepancies with respect to the characteristics of each unit in the synthetic control against matching discrepancies with respect to the characteristics of the synthetic control unit as a whole. We study the properties of this estimator and propose data-driven choices of the penalization parameter.
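
As a rough sketch of the penalized objective described above (not the authors' implementation; the data and the penalty value `lam` are made up), the weights for one treated unit solve a constrained least-squares problem that trades off the aggregate discrepancy against the penalized pairwise discrepancies:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
X0 = rng.normal(size=(5, 20))    # characteristics of 20 untreated units (5 covariates each)
x1 = rng.normal(size=5)          # characteristics of the treated unit
lam = 0.1                        # penalization parameter (a made-up value)

def objective(w):
    # discrepancy of the synthetic control unit as a whole ...
    aggregate = np.sum((x1 - X0 @ w) ** 2)
    # ... plus the penalized pairwise discrepancies of the contributing units
    pairwise = np.sum(w * np.sum((x1[:, None] - X0) ** 2, axis=0))
    return aggregate + lam * pairwise

n = X0.shape[1]
res = minimize(objective, np.full(n, 1.0 / n),
               bounds=[(0.0, 1.0)] * n,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
w_hat = res.x                    # nonnegative weights summing to one
```

As `lam` grows, the solution concentrates on the nearest untreated units (nearest-neighbor matching); as it shrinks to zero, the problem approaches the unpenalized synthetic control fit, whose solution may not be unique.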

**Monday 12 April 2021 16:00-17:15****KOCK Anders**(Aarhus University/University of Oxford) :__Consistency of p-norm based tests in high-dimensions: characterization, monotonicity, domination__**Co-author: David Preinerstorfer**- AbstractTo understand how the choice of a norm affects power properties of tests in high-dimensions, we study the consistency sets of p-norm based tests in the prototypical framework of sequence models with unrestricted parameter spaces. The consistency set of a test is here defined as the set of all arrays of alternatives the test is consistent against as the dimension of the parameter space diverges. We characterize the consistency sets of p-norm based tests and find, in particular, that the consistency against an array of alternatives cannot be determined solely in terms of the p-norm of the alternative. Our characterization also reveals an unexpected monotonicity result: namely that the consistency set is strictly increasing in p \in (0,\infty), such that tests based on higher p strictly dominate those based on lower p in terms of consistency. This monotonicity allows us to construct novel tests that dominate, with respect to their consistency behavior, all p-norm based tests without sacrificing asymptotic size.
- Full text [pdf]
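
To fix ideas, a p-norm based test in the Gaussian sequence model rejects for large values of the p-norm of the observed vector. A minimal sketch with simulated data (the alternative and the choices of p are illustrative; the paper's analysis concerns critical values and consistency as the dimension diverges, which this toy does not implement):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 1000
theta = np.zeros(d)
theta[0] = 6.0                     # a sparse alternative: a single large coordinate
y = theta + rng.normal(size=d)     # observation in the Gaussian sequence model

def p_norm(x, p):
    """l_p (quasi-)norm of x for p in (0, infinity)."""
    return np.sum(np.abs(x) ** p) ** (1.0 / p)

# Test statistics for several choices of p; in a real test each statistic
# would be compared against its own null critical value.
stats = {p: p_norm(y, p) for p in (0.5, 1, 2, 8)}
```

Note that the raw statistics are always ordered (the p-norm is non-increasing in p for a fixed vector); the power comparison in the paper is about how each statistic separates from its own null distribution, not about the raw magnitudes.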

**Monday 8 March 2021 16:00-17:15****KASY Maximilian**(University of Oxford) :__The social impact of algorithmic decision making: Economic perspectives__
- Paper: https://maxkasy.github.io/home/files/papers/adaptive_combinatorial.pdf
- Full text [pdf]

**Monday 8 February 2021 16:00-17:15**- online
**RAI Yoshiyasu**(University of Mannheim) :__Statistical Inference for Treatment Assignment Policies__- AbstractIn this paper, I study the statistical inference problem for treatment assignment policies. In typical applications, individuals with different characteristics are expected to differ in their responses to treatment. Hence, treatment assignment policies that allocate treatment based on individuals’ observed characteristics can have a significant influence on outcomes and welfare. A growing literature proposes various approaches to estimating the welfare-maximizing treatment assignment policy. This paper complements this work on estimation by developing a method of inference for treatment assignment policies that can be used to assess the precision of estimated optimal policies. In particular, for the welfare criterion used by Kitagawa and Tetenov (2018), my method constructs (i) a confidence set for the optimal policy and (ii) a confidence interval for the maximized welfare. A simulation study indicates that the proposed methods work well with modest sample sizes. I apply the method to experimental data from the National Job Training Partnership Act study.

**Monday 14 December 2020 16:00-17:15****FREYBERGER Joachim**(University of Bonn) :__Normalizations and misspecification in skill formation models__- AbstractAn important class of structural models investigates the determinants of skill formation and the optimal timing of interventions. To achieve point identification of the parameters, researchers typically normalize the scale and location of the unobserved skills. This paper shows that these seemingly innocuous restrictions can severely impact the interpretation of the parameters and counterfactual predictions. For example, simply changing the units of measurement of observed variables might yield ineffective investment strategies and misleading policy recommendations. To tackle these problems, this paper provides a new identification analysis, which pools all restrictions of the model, characterizes the identified set of all parameters without normalizations, illustrates which features depend on these normalizations, and introduces a new set of important policy-relevant parameters that are identified under weak assumptions and yield robust conclusions. As a byproduct, this paper also presents a general and formal definition of when restrictions are truly normalizations.
- Full text [pdf]

**Monday 9 November 2020 16:00-17:15****RENAULT Jérôme**(TSE) :__Approximate Maximum Likelihood for Complex Structural Models__**Co-authors: D.T. Frazier and V. Czellar**- AbstractIndirect Inference (I-I) is a popular technique for estimating complex parametric models whose likelihood function is intractable; however, the statistical efficiency of I-I estimation is questionable. While the efficient method of moments, Gallant and Tauchen (1996), promises efficiency, the price to pay for this efficiency is a loss of parsimony and thereby a potential lack of robustness to model misspecification. This stands in contrast to simpler I-I estimation strategies, which are known to display less sensitivity to model misspecification precisely due to their focus on specific elements of the underlying structural model. In this research, we propose a new simulation-based approach that maintains the parsimony of I-I estimation, which is often critical in empirical applications, but can also deliver estimators that are nearly as efficient as maximum likelihood. This new approach is based on using a constrained approximation to the structural model, which ensures identification and can deliver estimators that are nearly efficient. We demonstrate this approach through several examples, showing that it can deliver estimators that are nearly as efficient as maximum likelihood, when feasible, but can be employed in many situations where maximum likelihood is infeasible.
- Full text [pdf]

**Monday 12 October 2020 16:00-17:15**- Online
**GUNSILIUS Florian**(University of Michigan) :__Distributional synthetic controls__- AbstractThis article extends the method of synthetic controls to probability measures. The distribution of the synthetic control group is obtained as the optimally weighted barycenter in Wasserstein space of the distributions of the control groups which minimizes the distance to the distribution of the treatment group. It can be applied to settings with disaggregated- or aggregated (functional) data. The method produces a generically unique counterfactual distribution when the data are continuously distributed. A basic representation of the barycenter provides a computationally efficient implementation via a straightforward tensor-variate regression approach. In addition, identification results are provided that also shed new light on the classical synthetic controls estimator. As an illustration, the method provides an estimate of the counterfactual distribution of household income in Colorado one year after Amendment 64.
- Full text [pdf]
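
For univariate outcomes, the Wasserstein barycenter underlying this method has a simple form: its quantile function is the weighted average of the control groups' quantile functions. A hedged sketch with simulated data (the weights here are fixed rather than estimated to match a treatment group, unlike in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
# samples from three control-group outcome distributions (simulated)
controls = [rng.normal(loc=m, scale=s, size=500)
            for m, s in [(0.0, 1.0), (2.0, 1.5), (-1.0, 0.5)]]
weights = np.array([0.5, 0.3, 0.2])   # barycenter weights, taken as given here

# In one dimension, the Wasserstein-2 barycenter's quantile function is the
# weighted average of the controls' quantile functions.
grid = np.linspace(0.01, 0.99, 99)
quantiles = np.stack([np.quantile(c, grid) for c in controls])
barycenter_q = weights @ quantiles    # quantile function of the synthetic distribution
```

Averaging quantile functions (rather than CDFs or densities) is what makes the resulting synthetic distribution a valid distribution: a convex combination of non-decreasing functions is itself non-decreasing.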

**Monday 14 September 2020 16:00-17:15****KAMAT Vishal**(Toulouse School of Economics) :__Estimating the Welfare Effects of School Vouchers__**Co-author: S. Norris**- AbstractWe analyze the welfare effects of voucher provision in the DC Opportunity Scholarship Program (OSP), a school voucher program in Washington, DC, that randomly allocated vouchers to students. To do so, we develop new discrete choice tools to show how to use data with random allocation of school vouchers to characterize what we can learn about the welfare benefits of providing a voucher of a given amount, as measured by the average willingness to pay for that voucher, and these benefits net of the costs of providing that voucher. A novel feature of our tools is that they allow specifying the relationship of the demand for the various schools with respect to prices to be entirely nonparametric or to be parameterized in a flexible manner, both of which do not necessarily imply that the welfare parameters are point identified. Applying our tools to the OSP data, we find that provision of the status-quo as well as a wide range of counterfactual voucher amounts has a positive net average benefit. We find these positive results arise due to the presence of many low-tuition schools in the program; removing these schools from the program can result in a negative net average benefit.
- Full text [pdf]