Seminars
Econometrics Seminar
This twice-monthly seminar covers theoretical and applied econometrics. It takes place on Mondays at 4pm on Zoom unless otherwise indicated.
- Scientific organizers: Xavier d'Haultfœuille (CREST) and Philipp Ketz (PSE)
- Administrative contact: Sarah Dafer (sarah.dafer at psemail.eu)
This seminar benefits from government funding managed by the Agence Nationale de la Recherche under the Investissements d'avenir programme, reference ANR-17-EURE-0001.
Upcoming
- Monday 11 December 2023, 16:15-17:30
- PSE, room R1-14
- POULIOT Guillaume (University of Chicago): An Exact t-Test
- Abstract: Multivariate linear regression and randomization-based inference are two essential methods in statistics and econometrics. Nevertheless, the problem of producing a randomized test for the value of a single regression coefficient that is exactly valid when errors are exchangeable, and which is asymptotically valid for the best linear predictor, has remained elusive. In this paper, we produce a test that is exactly valid with exchangeable errors and which allows for general covariate designs; covariates may be continuous as well as discrete, and may be correlated. The test is asymptotically valid when the errors are not exchangeable, in particular in the presence of conditional heteroskedasticity. (An illustrative sketch follows this entry.)
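The flavor of randomization-based inference for a single coefficient can be illustrated with a generic Freedman-Lane residual-permutation test. This is only a minimal sketch of the broad idea under exchangeable errors, not Pouliot's exact construction; the design and all names below are illustrative assumptions.

```python
# A minimal Freedman-Lane-style residual permutation test for a single
# regression coefficient: generic randomization inference, not the
# paper's exact t-test. Exactness here relies on exchangeable errors
# and the sharp null beta_j = 0.
import numpy as np

def permutation_t_test(y, X, j, n_perm=999, rng=None):
    """Permutation p-value for H0: beta_j = 0 in y = X @ beta + e."""
    rng = np.random.default_rng(rng)
    X_mj = np.delete(X, j, axis=1)              # regressors other than x_j
    b0 = np.linalg.lstsq(X_mj, y, rcond=None)[0]  # restricted fit under H0
    resid = y - X_mj @ b0

    def t_stat(y_star):
        b = np.linalg.lstsq(X, y_star, rcond=None)[0]
        e = y_star - X @ b
        s2 = e @ e / (len(y_star) - X.shape[1])
        return b[j] / np.sqrt(s2 * np.linalg.inv(X.T @ X)[j, j])

    t_obs = t_stat(y)
    # Permute restricted residuals, rebuild outcomes, recompute t.
    t_perm = np.array([t_stat(X_mj @ b0 + rng.permutation(resid))
                       for _ in range(n_perm)])
    return (1 + np.sum(np.abs(t_perm) >= np.abs(t_obs))) / (n_perm + 1)

# Example: correlated continuous covariates plus an intercept.
rng = np.random.default_rng(0)
n = 100
x1 = rng.normal(size=n)
x2 = 0.5 * x1 + rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
y = 1.0 + 0.3 * x2 + rng.standard_t(df=3, size=n)   # x1 has zero effect
print(permutation_t_test(y, X, j=1))  # tests the coefficient on x1
```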
- Monday 26 February 2024, 16:15-17:30
- PSE, room R1-15
- DHAENE Geert (KU Leuven): TBA
- Monday 11 March 2024, 16:15-17:30
- Sciences Po, room H405
- MENZEL Konrad (NYU): TBA
- Monday 18 March 2024, 16:15-17:30
- CREST
- WINDMEIJER Frank (University of Oxford): TBA
- Monday 29 April 2024, 16:15-17:30
- PSE, room R1-14
- GU Jiaying (University of Toronto): TBA
- Monday 6 May 2024, 16:15-17:30
- TBD
- HE Junnan (Sciences Po): TBA
- Monday 13 May 2024, 16:15-17:30
- CREST
- SOKULLU Senay (University of Bristol): TBA
- Monday 27 May 2024, 14:45-16:00
- Sciences Po
- MOLINARI Francesca (Cornell University): TBA
- Monday 17 June 2024, 16:15-17:30
- Sciences Po
- RAMBACHAN Ashesh (MIT): TBA
Archives
- Monday 4 December 2023, 16:15-17:30
- Zoom
- FORNERON Jean-Jacques (Boston University): Occasionally Misspecified
- Abstract: When fitting a particular economic model to a sample of data, the model may turn out to be heavily misspecified for some observations. This can happen because of unmodelled idiosyncratic events, such as an abrupt but short-lived change in policy. These outliers can significantly alter estimates and inferences. Robust estimation is desirable to limit their influence, but for skewed data it induces another bias which can also invalidate the estimation and inferences. This paper proposes a robust GMM estimator with a simple bias correction that does not degrade robustness significantly. The paper provides finite-sample robustness bounds, and asymptotic uniform equivalence with an oracle that discards all outliers. Consistency and asymptotic normality follow from that result. An application to the "Price Puzzle," which finds that inflation increases when monetary policy tightens, illustrates the concerns and the method. The proposed estimator finds the intuitive result: tighter monetary policy leads to a decline in inflation. (An illustrative sketch follows this entry.)
- Full text [pdf]
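As a rough illustration of limiting outlier influence, the sketch below fits a linear model by Huber-type iteratively reweighted least squares, which bounds each observation's contribution. It is not Forneron's robust GMM estimator and implements no bias correction; the contamination design and tuning constant are illustrative assumptions.

```python
# Bare-bones Huber robust regression via iteratively reweighted least
# squares: only the generic outlier-down-weighting idea, not the paper's
# robust GMM estimator or its bias correction.
import numpy as np

def huber_irls(y, X, c=1.345, n_iter=50):
    beta = np.linalg.lstsq(X, y, rcond=None)[0]   # OLS starting value
    for _ in range(n_iter):
        e = y - X @ beta
        scale = 1.4826 * np.median(np.abs(e - np.median(e)))  # MAD scale
        u = e / scale
        w = np.where(np.abs(u) <= c, 1.0, c / np.abs(u))      # Huber weights
        WX = X * w[:, None]
        beta_new = np.linalg.solve(X.T @ WX, WX.T @ y)
        done = np.max(np.abs(beta_new - beta)) < 1e-10
        beta = beta_new
        if done:
            break
    return beta

rng = np.random.default_rng(1)
n = 200
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = 1 + 2 * x + rng.normal(size=n)
y[:10] += 15                 # a block of contaminated observations
print(np.linalg.lstsq(X, y, rcond=None)[0])  # OLS, pulled by the outliers
print(huber_irls(y, X))                      # Huber fit, close to (1, 2)
```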
- Monday 13 November 2023, 16:15-17:30
- Zoom
- ZHU Yinchu (Brandeis University): New possibilities in identification of binary choice models with fixed effects
- Abstract: We study the identification of binary choice models with fixed effects. We provide a condition called sign saturation and show that it is sufficient for identification of the model. In particular, we can guarantee identification even with bounded regressors. We also show that without this condition, the model is not identified unless the error distribution belongs to a small class. The same sign saturation condition is also essential for identifying the sign of treatment effects. We provide a test of the sign saturation condition, which can be implemented using existing algorithms for the maximum score estimator.
- Full text [pdf]
- Monday 16 October 2023, 18:00-19:15
- Zoom
- CHETVERIKOV Denis (UCLA): Spectral and post-spectral estimators for grouped panel data models
- Co-author: Elena Manresa
- Abstract: In this paper, we develop spectral and post-spectral estimators for grouped panel data models. Both estimators are consistent in the asymptotics where the number of observations N and the number of time periods T simultaneously grow large. In addition, the post-spectral estimator is sqrt(NT)-consistent and asymptotically normal with mean zero under the assumption of well-separated groups, even if T grows much more slowly than N. The post-spectral estimator therefore has theoretical properties comparable to those of the grouped fixed-effects estimator developed by Bonhomme and Manresa (2015). In contrast to the grouped fixed-effects estimator, however, our post-spectral estimator is computationally straightforward. (An illustrative sketch follows this entry.)
- Full text [pdf]
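The generic "spectral" idea, recovering latent groups from the leading singular vectors of the outcome panel and then averaging within the recovered groups, can be sketched as follows. This toy version, on an assumed grouped-trends design, is not the authors' estimator, which involves additional steps.

```python
# Toy spectral grouping for panel data: project units onto the top
# singular vectors of the N x T outcome matrix, cluster the projections,
# then estimate group-by-time means from the recovered partition.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
N, T, G = 300, 40, 3
groups = rng.integers(G, size=N)                      # true group labels
paths = np.cumsum(rng.normal(size=(G, T)), axis=1)    # well-separated trends
Y = paths[groups] + rng.normal(scale=0.5, size=(N, T))

# Spectral step: embed each unit via the top-G left singular vectors.
U, s, Vt = np.linalg.svd(Y, full_matrices=False)
embed = U[:, :G] * s[:G]
labels = KMeans(n_clusters=G, n_init=10, random_state=0).fit_predict(embed)

# Group-by-time means from the estimated partition.
alpha_hat = np.vstack([Y[labels == g].mean(axis=0) for g in range(G)])
print(alpha_hat.shape)  # (G, T): one estimated path per recovered group
```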
- Monday 9 October 2023, 16:15-17:30
- CREST, room 3001
- KWON Soonwoo (Brown University): Testing Mechanisms
- Co-author: Jonathan Roth
- Abstract: Economists are often interested in the mechanisms by which a particular treatment affects an outcome. This paper develops tests for the "sharp null of full mediation" that the treatment D operates on the outcome Y only through a particular conjectured mechanism (or set of mechanisms) M. A key observation is that if D is randomly assigned and has a monotone effect on M, then D is a valid instrumental variable for the LATE of M on Y. Existing tools for testing the validity of the LATE assumptions can thus be used to test the sharp null of full mediation when M and D are binary. We extend these results to settings where M is multi-valued or multi-dimensional. We further provide methods for lower-bounding the size of the alternative mechanisms when the sharp null is rejected. An advantage of our approach relative to existing tools for mediation analysis is that it does not require stringent assumptions about how M is assigned. (An illustrative sketch follows this entry.)
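In the all-binary case, the testable implication of instrument validity mentioned above can be checked with the sample analogues of the Kitagawa/Balke-Pearl inequalities, with D playing the role of the instrument for the mechanism M. The sketch below uses an assumed design and omits the formal inference step (e.g., bootstrap critical values).

```python
# Sample-analogue check of the instrument-validity inequalities. Under
# the sharp null of full mediation, randomized D is a valid instrument
# for M, so large violations are evidence against the null.
import numpy as np

def iv_inequality_violations(D, M, Y):
    """Max violation of P(Y=y,M=1|D=1) >= P(Y=y,M=1|D=0) and
    P(Y=y,M=0|D=0) >= P(Y=y,M=0|D=1), over y in {0,1}."""
    def p(y, m, d):
        sel = D == d
        return np.mean((Y[sel] == y) & (M[sel] == m))
    viol = []
    for y in (0, 1):
        viol.append(p(y, 1, 0) - p(y, 1, 1))   # should be <= 0
        viol.append(p(y, 0, 1) - p(y, 0, 0))   # should be <= 0
    return max(viol)

rng = np.random.default_rng(3)
n = 5000
D = rng.integers(2, size=n)
M = ((0.3 + 0.4 * D) > rng.uniform(size=n)).astype(int)  # D raises M
Y = ((0.2 + 0.5 * M) > rng.uniform(size=n)).astype(int)  # Y depends on D only via M
print(iv_inequality_violations(D, M, Y))  # near or below 0 under the null
```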
- Monday 18 September 2023, 16:15-17:30
- Sciences Po, room H405
- TAMER Elie (Harvard University): Parallel Trends and Dynamic Choices
- Co-authors: Philip Marx and Xun Tang
- Abstract: Difference-in-differences is a common method for estimating treatment effects, and the parallel trends condition is its main identifying assumption: the trend in mean untreated outcomes is independent of the observed treatment status. In observational settings, treatment is often a dynamic choice made or influenced by rational actors, such as policy-makers, firms, or individual agents. This paper relates parallel trends to economic models of dynamic choice. We clarify the implications of parallel trends for agent behavior and study when dynamic selection motives lead to violations of parallel trends. Finally, we consider identification under alternative assumptions that accommodate features of dynamic choice.
- Full text [pdf]
- Monday 11 September 2023, 16:15-17:30
- PSE, Campus Jourdan, room R1-14
- TETENOV Aleksey (University of Geneva): Constrained Classification and Policy Learning
- Co-authors: Toru Kitagawa and Shosei Sakaguchi
- Abstract: Modern machine learning approaches to classification, including AdaBoost, support vector machines, and deep neural networks, utilize surrogate loss techniques to circumvent the computational complexity of minimizing empirical classification risk. These techniques are also useful for causal policy learning problems, since estimation of individualized treatment rules can be cast as a weighted (cost-sensitive) classification problem. Consistency of the surrogate loss approaches studied in Zhang (2004) and Bartlett et al. (2006) relies on the assumption of correct specification, meaning that the specified set of classifiers is rich enough to contain a first-best classifier. This assumption is, however, less credible when the set of classifiers is constrained by interpretability or fairness, leaving the applicability of surrogate loss-based algorithms unknown in such second-best scenarios. This paper studies consistency of surrogate loss procedures under a constrained set of classifiers without assuming correct specification. We show that in settings where the constraint restricts the classifier's prediction set only, hinge losses (i.e., l1-support vector machines) are the only surrogate losses that preserve consistency in second-best scenarios. If the constraint additionally restricts the functional form of the classifier, consistency of a surrogate loss approach is not guaranteed, even with hinge loss. We therefore characterize conditions on the constrained set of classifiers that can guarantee consistency of hinge risk minimizing classifiers. Exploiting our theoretical results, we develop robust and computationally attractive hinge loss-based procedures for a monotone classification problem.
- Full text [pdf]
- Monday 19 June 2023, 09:30-12:45
- BONHOMME Stéphane (University of Chicago): Mini Workshop
- Abstract: 9:30-10:15, Stéphane Bonhomme (University of Chicago): "Estimating Individual Responses when Tomorrow Matters". 10:15-11:00, Koen Jochmans (Toulouse School of Economics): "Bootstrap inference for fixed-effect models". 11:00-11:15: break. 11:15-12:00, Anna Simoni (CREST, ENSAE, École Polytechnique): "Bayesian Bi-level Sparse Group Regressions for Macroeconomic Forecasting". 12:00-12:45, Frank Windmeijer (University of Oxford): "The Robust F-Statistic as a Test for Weak Instruments".
- Monday 12 June 2023, 16:00-17:15
- Zoom
- MASTEN Matt (Duke University): Assessing Omitted Variable Bias when the Controls are Endogenous
- Co-authors: Paul Diegert and Alexandre Poirier
- Abstract: Omitted variables are one of the most important threats to the identification of causal effects. Several widely used approaches, including Oster (2019), assess the impact of omitted variables on empirical conclusions by comparing measures of selection on observables with measures of selection on unobservables. These approaches either (1) assume the omitted variables are uncorrelated with the included controls, an assumption that is often considered strong and implausible, or (2) use a method called residualization to avoid this assumption. In our first contribution, we develop a framework for objectively comparing sensitivity parameters. We use this framework to formally prove that the residualization method generally leads to incorrect conclusions about robustness. In our second contribution, we then provide a new approach to sensitivity analysis that avoids this critique, allows the omitted variables to be correlated with the included controls, and lets researchers calibrate sensitivity parameters by comparing the magnitude of selection on observables with the magnitude of selection on unobservables, as in previous methods. We illustrate our results in an empirical study of the effect of historical American frontier life on modern cultural beliefs. Finally, we implement these methods in the companion Stata module regsensitivity for easy use in practice.
- Full text [pdf]
- Monday 22 May 2023, 16:00-17:15
- PSE, 48 boulevard Jourdan, room R1-15
- WÜTHRICH Kaspar (UC San Diego): (When) should you adjust inferences for multiple hypothesis testing?
- Co-authors: Davide Viviano and Paul Niehaus
- Abstract: Multiple hypothesis testing practices vary widely, without consensus on which are appropriate when. We provide an economic foundation for these practices. In studies of multiple interventions or sub-populations, adjustments may be appropriate depending on scale economies in the research production function, with control of classical notions of compound errors emerging in some but not all cases. Studies with multiple outcomes motivate testing using a single index, or adjusted tests of several indices when the intended audience is heterogeneous. Data on actual research costs in two applications suggest both that some adjustment is warranted and that standard procedures are overly conservative.
- Full text [pdf]
- Monday 15 May 2023, 16:00-17:15
- CREST, room 3001
- NOACK Claudia (Oxford): Flexible Covariate Adjustments in Regression Discontinuity Designs
- Abstract: Empirical regression discontinuity (RD) studies often use covariates to increase the precision of their estimates. In this paper, we propose a novel class of estimators that use such covariate information more efficiently than the linear adjustment estimators currently in wide use. Our approach can accommodate a possibly large number of either discrete or continuous covariates. It involves running a standard RD analysis with an appropriately modified outcome variable, which takes the form of the difference between the original outcome and a function of the covariates. We characterize the function that leads to the estimator with the smallest asymptotic variance, and show how it can be estimated via modern machine learning, nonparametric regression, or classical parametric methods. The resulting estimator is easy to implement, as tuning parameters can be chosen as in a conventional RD analysis. An extensive simulation study illustrates the performance of our approach. (An illustrative sketch follows this entry.)
- Full text [pdf]
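A simplified version of the modified-outcome idea: cross-fit a flexible regression of the outcome on the covariates, subtract the fit, and run a standard local-linear RD on the adjusted outcome. The random-forest adjustment, the fixed bandwidth, and the simulated design below are illustrative assumptions; the paper derives the variance-minimizing adjustment and uses conventional RD tuning.

```python
# RD with a cross-fitted covariate adjustment: replace Y by
# M = Y - mu_hat(X) and rerun a standard local-linear RD on M. Because
# mu_hat ignores the running variable and treatment, the RD estimand
# is unchanged; only the noise is reduced.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold

rng = np.random.default_rng(4)
n = 2000
r = rng.uniform(-1, 1, size=n)                 # running variable, cutoff 0
X = rng.normal(size=(n, 3))                    # extra covariates
tau = 0.5
y = tau * (r >= 0) + 1.0 * r + X @ [0.8, -0.5, 0.3] + rng.normal(0, 1, n)

mu_hat = np.zeros(n)                           # cross-fitted adjustment
for tr, te in KFold(5, shuffle=True, random_state=0).split(X):
    f = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[tr], y[tr])
    mu_hat[te] = f.predict(X[te])
m = y - mu_hat

def llr_boundary(r, v, side, h):
    """Local-linear intercept at the cutoff, triangular kernel."""
    sel = (r >= 0) if side == "+" else (r < 0)
    rr, vv = r[sel], v[sel]
    w = np.clip(1 - np.abs(rr) / h, 0, None)
    Z = np.column_stack([np.ones(rr.size), rr])
    WZ = Z * w[:, None]
    return np.linalg.solve(Z.T @ WZ, WZ.T @ vv)[0]

h = 0.3  # ad hoc bandwidth for the sketch
tau_raw = llr_boundary(r, y, "+", h) - llr_boundary(r, y, "-", h)
tau_adj = llr_boundary(r, m, "+", h) - llr_boundary(r, m, "-", h)
print(tau_raw, tau_adj)  # similar estimates; the adjusted one is less variable
```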
- Monday 17 April 2023, 17:30-18:45
- URA Takuya (UC Davis): Slow Movers in Panel Data
- Co-author: Yuya Sasaki
- Abstract: Panel data often contain stayers (units with no within-variation) and slow movers (units with little within-variation). In the presence of many slow movers, conventional econometric methods can fail to work. We propose a novel method of inference for the average partial effects in correlated random coefficient models that is robust across various distributions of within-variation, handling the cases with many stayers and/or many slow movers in a unified manner. In addition to this robustness property, our proposed method entails smaller biases and hence improves the accuracy of inference compared to existing alternatives. Simulation studies support our theoretical claims about these properties: the conventional 95% confidence interval covers the true parameter value with 37-93% frequency, whereas our proposed one achieves 93-96% coverage frequency.
- Full text [pdf]
- Monday 27 March 2023, 16:00-17:15
- PSE, Campus Jourdan, room R1-15
- DE CHAISEMARTIN Clément (Sciences Po): More Robust Estimators for Panel Bartik Designs, with an Application to the Effect of Chinese Imports on US Employment
- Co-author: Ziteng Lei
- Abstract: We show that panel Bartik regressions identify non-convex combinations of location-and-period-specific treatment effects. Those regressions could therefore be biased in the presence of heterogeneous effects. We propose two alternative correlated-random-coefficient (CRC) estimators that are more robust to heterogeneous effects. We revisit Autor et al. (2013), who use a panel Bartik regression to estimate the effect of imports from China on US employment. Their regression estimates a highly non-convex combination of effects, and our CRC estimates are small and insignificant: without assuming constant effects, one cannot conclude that imports from China had a significantly negative effect on US employment.
- Full text [pdf]
- Monday 13 March 2023
- CHETVERIKOV Denis (UCLA): Seminar postponed.
- Monday 13 February 2023, 16:00-17:00
- BEYHUM Jad (ENSAI): Nowcasting GDP with factor-augmented high-dimensional MIDAS regression
- Co-author: J. Striaukas
- Abstract: This paper introduces a factor-augmented high-dimensional mixed-frequency regression model to nowcast GDP growth. The new approach goes beyond classical sparse regression and factor models by combining them. We derive rates of convergence of our estimator. Applied to nowcasting GDP, the technique significantly improves on a range of more classical nowcasting methods based on either sparse regression or factor models alone. (An illustrative sketch follows this entry.)
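A schematic "factors plus sparse regression" nowcast can be pieced together as follows: extract principal components from a large predictor panel, then let a lasso select jointly among the factors and the raw predictors. The toy single-frequency design below ignores the mixed-frequency (MIDAS) structure that the paper actually handles, and all dimensions are illustrative.

```python
# Factor-augmented sparse nowcast: PCA factors plus lasso selection over
# factors and raw predictors jointly. Quarterly toy data; no MIDAS lags.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LassoCV

rng = np.random.default_rng(5)
T, p, k = 200, 100, 3
F = rng.normal(size=(T, k))                       # latent factors
Lam = rng.normal(size=(p, k))
Xpanel = F @ Lam.T + rng.normal(size=(T, p))      # high-dimensional panel
beta_sparse = np.zeros(p)
beta_sparse[:5] = 0.5                             # a few direct predictors
y = F @ [1.0, -0.5, 0.25] + Xpanel @ beta_sparse + rng.normal(size=T)

factors = PCA(n_components=k).fit_transform(Xpanel)
design = np.column_stack([factors, Xpanel])       # factor-augmented design
fit = LassoCV(cv=5).fit(design[:-1], y[:-1])      # hold out the last period
print(fit.predict(design[-1:]), y[-1])            # "nowcast" vs realized value
```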
- Monday 5 December 2022, 16:00-17:15
- Zoom
- KETZ Philipp (PSE, CNRS): Allowing for weak identification when testing GARCH-X type models
- Abstract: In this paper, we use the results in Andrews and Cheng (2012), extended to allow for parameters to be near or at the boundary of the parameter space, to derive the asymptotic distributions of the two test statistics used in the two-step (testing) procedure proposed by Pedersen and Rahbek (2019). The latter aims at testing the null hypothesis that a GARCH-X type model, with exogenous covariates (X), reduces to a standard GARCH type model, while allowing the "GARCH parameter" to be unidentified. We then provide a characterization result for the asymptotic size of any test of this null hypothesis, before numerically establishing a lower bound on the asymptotic size of the two-step procedure at the 5% nominal level. This lower bound exceeds the nominal level, revealing that the two-step procedure does not control asymptotic size. In a simulation study, we show that this finding is relevant for finite samples, in that the two-step procedure can suffer from overrejection in finite samples. We also propose a new test that, by construction, controls asymptotic size and is found to be more powerful than the two-step procedure when the "ARCH parameter" is "very small" (in which case the two-step procedure underrejects).
- Full text [pdf]
- Monday 21 November 2022, 16:00-17:15
- PSE, room R1-13
- MUGNIER Martin (CREST, ENSAE, Institut Polytechnique de Paris): Unobserved Clusters of Time-Varying Heterogeneity in Nonlinear Panel Data Models
- Abstract: In studies based on longitudinal data, researchers often assume time-invariant unobserved heterogeneity or linear-in-parameters conditional expectations. Violation of these assumptions may lead to poor counterfactuals. I study the identification and estimation of a large class of nonlinear grouped fixed effects (NGFE) models, where the relationship between observed covariates and cross-sectional unobserved heterogeneity is left unrestricted but the latter takes only a restricted number of paths over time. I show that the corresponding clusters and the nonparametrically specified link function can be point-identified when both dimensions of the panel are large. I propose a semiparametric NGFE estimator whose implementation is feasible, and establish its large-sample properties in popular binary and count outcome models. Distinctive features of the NGFE estimator are that it is asymptotically unbiased and normal at parametric rates, and that it allows the number of periods to grow slowly with the number of cross-sectional units. Monte Carlo simulations suggest good finite-sample performance. I apply this new method to revisit the so-called inverted-U relationship between product market competition and innovation. Allowing for clustered patterns of time-varying unobserved heterogeneity leads to a much flatter estimated curve.
- Full text [pdf]
- Monday 7 November 2022, 16:00-17:15
- PSE, room R2-20
- SPINI Pietro (Bristol): Robustness, Heterogeneous Treatment Effects and Covariate Shifts
- Abstract: This paper studies the robustness of estimated policy effects to changes in the distribution of covariates. Robustness to covariate shifts is important, for example, when evaluating the external validity of (quasi-)experimental results, which are often used as a benchmark for evidence-based policy-making. I propose a novel scalar robustness metric: the magnitude of the smallest covariate shift needed to invalidate a claim about the policy effect (for example, ATE >= 0) supported by the (quasi-)experimental evidence. My metric links the heterogeneity of policy effects and robustness in a flexible, nonparametric way and does not require functional form assumptions. I cast the estimation of the robustness metric as a de-biased GMM problem. This approach guarantees a parametric convergence rate for the robustness metric while allowing for machine-learning-based estimators of policy effect heterogeneity (for example, lasso, random forests, boosting, neural nets). I apply my procedure to the Oregon Health Insurance Experiment, studying the robustness of policy effect estimates for health-care utilization and financial strain outcomes relative to a shift in the distribution of context-specific covariates. Such covariates are likely to differ across US states, making the quantification of robustness an important exercise for adoption of the insurance policy in states other than Oregon. I find that the effect on outpatient visits is the most robust among the measures of health-care utilization considered.
- Full text [pdf]
- Monday 10 October 2022, 16:00-17:15
- PSE, room R1-13
- SUN Liyang (CEMFI): Empirical Welfare Maximization with Constraints
- Abstract: When designing eligibility criteria for welfare programs, policymakers naturally want to target the individuals who will benefit the most. This paper proposes two new econometric approaches to selecting an optimal eligibility criterion when individuals' program costs are unknown and need to be estimated. One is designed to achieve the highest possible benefit while satisfying a budget constraint with high probability. The other is designed to optimally trade off the benefit against the cost of violating the budget constraint. The setting I consider extends the previous literature on empirical welfare maximization by allowing for uncertainty in estimating the budget needed to implement the criterion, in addition to its benefit. Consequently, my approaches improve on the existing approach, as they can be applied to settings with imperfect take-up or varying program needs. I illustrate my approaches empirically by deriving an optimal budget-constrained Medicaid expansion in the US. (An illustrative sketch follows this entry.)
- Full text [pdf]
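The structure of the first, budget-constrained problem can be conveyed with a naive plug-in sketch: among threshold eligibility rules, maximize estimated benefit subject to an upper confidence bound on estimated per-capita cost staying within budget. The threshold class, the one-sided normal bound, and the simulated benefit and cost variables are all illustrative simplifications, not the paper's procedure.

```python
# Naive budget-constrained empirical welfare maximization over threshold
# rules "treat if x >= t": maximize estimated benefit subject to an
# upper confidence bound on estimated cost staying within the budget.
import numpy as np

rng = np.random.default_rng(6)
n = 4000
x = rng.normal(size=n)                                   # eligibility score
benefit = 1.0 + 0.8 * x + rng.normal(size=n)             # estimated benefit
cost = np.exp(0.3 * x) + rng.normal(scale=0.2, size=n)   # estimated cost
budget = 0.9                                             # per-capita budget
z = 1.645                                                # one-sided 95% quantile

best_t, best_w = None, -np.inf
for t in np.quantile(x, np.linspace(0.05, 0.95, 91)):
    treat = x >= t
    w = np.mean(benefit * treat)              # empirical welfare of the rule
    c_hat = np.mean(cost * treat)
    c_se = np.std(cost * treat) / np.sqrt(n)
    if c_hat + z * c_se <= budget and w > best_w:   # budget met w.h.p.
        best_t, best_w = t, w
print(best_t, best_w)
```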
- Monday 26 September 2022, 16:00-17:15
- CREST, room 3001
- HELM Ines (LMU Munich): Inference for Ranks
- Co-authors: Sergei Bazylik, Magne Mogstad, Joseph P. Romano, and Azeem M. Shaikh
- Abstract: This talk is based on two papers: https://www.ucl.ac.uk/~uctpdwi/papers/cwp0422.pdf and https://www.ucl.ac.uk/~uctpdwi/papers/cwp4021.pdf.
- Monday 12 September 2022, 16:00-17:15
- SEMENOVA Vira (Berkeley): Automated Inference on Sharp Bounds
- Abstract: Many causal parameters involving the joint distribution of potential outcomes in the treated and control states cannot be point-identified; they can only be bounded from above and below. The bounds can be tightened by conditioning on pre-treatment covariates, and the sharp version of the bounds corresponds to using the full covariate vector. This paper gives a method for estimation and inference on sharp bounds determined by a linear system of under-identified equalities (e.g., as in Heckman et al. (ReStud, 1997)). In the sharp-bounds case, the right-hand side of this system involves a nuisance function of (many) covariates (e.g., the conditional probability of employment in the treated or control state). Combining Neyman orthogonality and sample splitting, I provide an asymptotically Gaussian estimator of the sharp bound that does not require solving the linear system in closed form. I demonstrate the method in an empirical application to Connecticut's Jobs First welfare reform experiment.
- Monday 27 June 2022, 16:00-17:15
- YOUNG Alwyn (LSE): Consistency without Inference: Instrumental Variables in Practical Application
- Abstract: I use Monte Carlo simulations, the jackknife, and multiple forms of the bootstrap to study a comprehensive sample of 1309 instrumental variables regressions in 30 papers published in the journals of the American Economic Association. Monte Carlo simulations based upon published regressions show that non-iid error processes in highly leveraged regressions, both prominent features of published work, adversely affect the size and power of IV tests, while increasing the bias and mean squared error of IV relative to OLS. Weak-instrument pre-tests based upon F-statistics are found to be largely uninformative about both size and bias. In published papers IV has little power: despite producing substantively different estimates, it rarely rejects the OLS point estimate or the null that OLS is unbiased, while the statistical significance of excluded instruments is exaggerated.
- Full text [pdf]
- Monday 20 June 2022, 16:00-17:15
- CREST, 5 Av. Le Chatelier, 91120 Palaiseau
- MAUREL Arnaud (Duke University): Heterogeneity, Uncertainty and Learning: A Semiparametric Identification Analysis
- Co-authors: J. Bunting and P. Diegert
- Abstract: In this paper, we provide new semiparametric identification results for a general class of learning models in which outcomes of interest depend on (i) predictable heterogeneity, (ii) initially unpredictable heterogeneity that may be revealed over time, and (iii) transitory uncertainty. We consider a common environment where the researcher only has access to longitudinal data on choices and outcomes. We establish point identification of the outcome equation parameters and of the distribution of the three types of unobservables, under the standard assumption that unpredictable heterogeneity and uncertainty are normally distributed. We also show that a pure learning model remains identified without any distributional assumption. We then derive a sieve MLE estimator for the model parameters, which exhibits good finite-sample performance and is very tractable in practice.
- Full text [pdf]
- Monday 13 June 2022, 16:00-17:15
- YOUNG Alwyn (LSE): This talk has been cancelled and will be rescheduled.
- Full text [pdf]
- Monday 30 May 2022, 16:00-17:15
- STOULI Sami (Bristol): Gaussian transforms modeling and the estimation of distributional regression functions
- Co-author: Richard Spady
- Abstract: We propose flexible Gaussian representations for conditional cumulative distribution functions and give a concave likelihood criterion for their estimation. Optimal representations satisfy the monotonicity property of conditional cumulative distribution functions, including in finite samples and under general misspecification. We use these representations to provide a unified framework for flexible maximum likelihood estimation of conditional density, cumulative distribution, and quantile functions at parametric rate. Our formulation yields substantial simplifications and finite-sample improvements over related methods. An empirical application to the gender wage gap in the United States illustrates our framework.
- Full text [pdf]
- Monday 16 May 2022, 16:00-17:15
- Room R1-14
- MCCLOSKEY Adam (University of Colorado, Boulder): Short and Simple Confidence Intervals when the Directions of Some Effects are Known
- Co-author: Philipp Ketz
- Abstract: We introduce adaptive confidence intervals for a parameter of interest in the presence of nuisance parameters with known signs, such as coefficients on control variables. Our confidence intervals are trivial to compute and can provide significant length reductions relative to standard ones when the nuisance parameters are small; at the same time, they entail minimal length increases at any parameter values. We apply our confidence intervals to the linear regression model, prove their uniform validity, and illustrate their length properties in an empirical application to a factorial-design field experiment and a Monte Carlo study calibrated to the empirical application.
- Full text [pdf]
- Thursday 14 April 2022, 11:00-12:00
- PSE, room R2.21
- DE PAULA Aureo (UCL): Identifying Network Ties from Panel Data: Theory and an Application to Tax Competition
- Co-authors: Imran Rasul and Pedro CL Souza
- Abstract: Social interactions determine many economic behaviors, but information on social ties does not exist in most publicly available and widely used datasets. We present results on the identification of social networks from observational panel data that contain no information on social ties between agents. In the context of a canonical social interactions model, we provide sufficient conditions under which the social interactions matrix and the endogenous and exogenous social effect parameters are all globally identified. While this result is relevant across different estimation strategies, we then describe how high-dimensional estimation techniques can be used to estimate the interactions model based on the Adaptive Elastic Net GMM method. We employ the method to study tax competition across US states. We find that the identified social interactions matrix implies tax competition that differs markedly from the common assumption of competition between geographically neighboring states, providing further insights for the long-standing debate on the relative roles of factor mobility and yardstick competition in driving tax-setting behavior across states. Most broadly, our identification results and application show that the analysis of social interactions can be extended to economic realms where no network data exist.
- Full text [pdf]
- Friday 8 April 2022, 14:30-15:45
- CREST, room 3001
- MILLER Robert (Carnegie Mellon University): Search and Matching by Race and Gender
- Co-author: Rebecca Lessem
- Abstract: This project uses data from a large firm with information on all job applications as well as labor market outcomes within the firm over a five-year period. Careful analysis of the data shows that African Americans and women engage in more overt job search activity within the organization than Caucasian males, attain shorter tenure on each job, and experience slower wage growth. Furthermore, we see some differences across race and gender at each stage of the application process. In particular, African Americans are more likely to apply for positions for which they do not meet the minimal qualifications, and both African Americans and women are more likely to withdraw from the application process. We also see that African Americans are less likely to be interviewed for a position, although we do not see any racial differences in hiring probabilities conditional on being interviewed. To explain these empirical patterns, we develop and estimate a model of two-sided search and matching, in which positions become vacant when the current occupant of the job leaves, the firm begins a search process by advertising the position, and workers employed both inside and outside the organization apply for the newly vacated position. Workers choose their intensity of job search by setting a threshold above which they would accept a job offer. The applicants are culled during a hiring process that leads both parties to become more informed about the potential job match with each applicant. The successful applicant accumulates experience on the job. After estimating the model, we will use counterfactuals to understand more about the differences in the search and matching process across racial and gender groups, as well as how these differences affect wage outcomes. First, we know from the data that the durations that people stay in a job differ by race and gender; our counterfactuals can analyze how large a role these durations play in the hiring process. Second, we can study how outcomes would change if the hiring committee were forced to interview more or fewer candidates. This can help us understand how institutional restrictions affect the likelihood that an individual is offered a position.
- Monday 21 March 2022, 16:00-17:15
- DOVONON Prosper (Concordia University): Specification Testing for Conditional Moment Restrictions under Local Identification Failure
- Co-author: Nikolay Gospodinov
- Abstract: In this paper, we study the asymptotic behavior of the specification test in conditional moment restriction models under first-order local identification failure with dependent data. More specifically, we obtain conditions under which the conventional specification test for conditional moment restrictions remains valid when first-order local identification fails but global identification is still attainable. In the process, we obtain some novel intermediate results, including extending the first- and second-order local identification framework to models defined by conditional moment restrictions, characterizing the rate of convergence of the GMM estimator, and deriving the limiting representation of degenerate U-statistics under strong mixing dependence. Simulation and empirical results illustrate the properties and practical relevance of the proposed testing framework.
- Full text [pdf]
- Monday 7 March 2022, 16:00-17:15
- BEYHUM Jad (ENSAI): Instrumental variable estimation of dynamic treatment effects on a survival outcome
- Co-authors: Samuele Centorrino, Jean-Pierre Florens, and Ingrid Van Keilegom
- Abstract: This paper considers identification and estimation of the causal effect of the time Z until a subject is treated on a survival outcome T. The treatment is not randomly assigned, T is randomly right-censored by a random variable C, and the time to treatment Z is right-censored by min(T,C). The endogeneity issue is treated using an instrumental variable explaining Z and independent of the error term of the model. We study identification in a fully nonparametric framework. We show that our specification generates an integral equation, of which the regression function of interest is a solution. We provide identification conditions that rely on this identification equation. For estimation purposes, we assume that the regression function follows a parametric model. We propose an estimation procedure and give conditions under which the estimator is asymptotically normal. The estimators exhibit good finite-sample properties in simulations. Our methodology is applied to find evidence supporting the efficacy of a therapy for burnout.
- Full text [pdf]
- Monday 14 February 2022, 16:00-17:15
- ANANTH Abhishek (University of Geneva): Optimal Treatment Assignment Rules on Networked Populations
- Abstract: I study the problem of optimally distributing treatments among individuals on a network in the presence of spillovers in the effect of treatment across linked individuals. In this paper, I consider the problem of a planner who needs to distribute a limited number of preventative treatments (e.g., vaccines) for a deadly infectious disease among individuals in a target village in order to maximize population welfare. Since the planner does not know the extent of spillovers or the heterogeneity in treatment effects, she uses data from an experiment conducted in a separate pilot village. By placing restrictions on how others' treatments affect one's outcome on the contact network, I derive theoretical limits on how the data from the experiment can best be used to allocate the treatments when the planner observes the contact network structure in both the target and pilot villages. For this purpose, I extend the empirical welfare maximization (EWM) procedure to derive an optimal statistical treatment rule. Under restrictions on the shape of the contact network, I provide finite-sample bounds for the uniform regret (a measure of the effectiveness of a treatment rule). The main takeaway is that the uniform regret associated with EWM, extended to account for spillovers, converges to 0 at the parametric rate as the size of the pilot experiment grows. I also show that no statistical treatment rule admits a faster rate of convergence for the uniform regret, suggesting that the EWM procedure is rate-optimal.
- Full text [pdf]
- Monday 31 January 2022, 16:00-17:15
- HIRSHBERG David (Emory): The Basis for Inference based on Synthetic Control Methods
- Abstract: Synthetic control methods are becoming popular far beyond the context of comparative case studies in which they were first proposed. It is no longer the rule that they are used only when we have one (or few) treated units. But despite recent attention, there is little consensus on when they work and how to do inference based on them. That there is no single way to think about panel data makes this difficult. In some interpretations, we are solving what is essentially a matrix completion problem with noise that is completely unrelated to the selection of treatment; in others, we are inverse propensity weighting to adjust for the selection of treatment based on past outcomes, noise and all. In this talk, I will discuss some results characterizing synthetic control estimation under these two interpretations, drawing on the literature on synthetic control estimators for panel data as well as that on covariate balancing or calibrated inverse propensity weighting estimators for cross-sectional data. I will also highlight some issues that become apparent when we try to mix these perspectives, approaching inference based on selection of treatment from a perspective in which behaviors specific to individual units, i.e. fixed effects (interactive or otherwise), are needed to explain the heterogeneity of the data.
- Monday 13 December 2021, 16:00-17:15
- KOLESAR Michal (Princeton): On Estimating Multiple Treatment Effects with Regression
- Co-authors: Paul Goldsmith-Pinkham and Peter Hull
- Abstract: We study the causal interpretation of regressions on multiple dependent treatments and flexible controls. Such regressions are often used to analyze randomized controlled trials with multiple intervention arms, and to estimate institutional quality (e.g., teacher value-added) with observational data. We show that, unlike with a single binary treatment, these regressions do not generally estimate convex averages of causal effects, even when the treatments are conditionally randomly assigned and the controls fully address omitted variables bias. We discuss different solutions to this issue, and propose as a solution a new class of efficient estimators of weighted average treatment effects. (An illustrative sketch follows this entry.)
- Full text [pdf]
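The issue can be made concrete via the Frisch-Waugh-Lovell representation: the OLS coefficient on one treatment is a weighted sum of outcomes, with weights proportional to that treatment residualized on the controls and the other arm. The simulation below (an assumed design, not taken from the paper) computes these weights and shows that units in the other arm receive individually nonzero weights; those weights sum to zero, so a constant arm-2 effect drops out, but heterogeneous arm-2 effects contaminate the coefficient.

```python
# FWL weights behind a multiple-treatment regression: beta1_hat equals
# weights @ y, with weights built from treatment 1 residualized on the
# controls and the other treatment arm.
import numpy as np

rng = np.random.default_rng(7)
n = 1000
w = rng.normal(size=n)                          # a control variable
# Two mutually exclusive treatment arms, assignment depending on w.
probs = np.exp(np.column_stack([0 * w, 0.5 * w, -0.5 * w]))
probs /= probs.sum(axis=1, keepdims=True)
arm = np.array([rng.choice(3, p=p) for p in probs])
D1, D2 = (arm == 1).astype(float), (arm == 2).astype(float)
Z = np.column_stack([np.ones(n), w, w**2, D2])  # everything except D1

g = np.linalg.lstsq(Z, D1, rcond=None)[0]       # residualize D1 on Z
D1_tilde = D1 - Z @ g
weights = D1_tilde / (D1_tilde @ D1_tilde)      # beta1_hat = weights @ y

print(weights[arm == 1].sum())                  # own-arm weights sum to 1
print(weights[arm == 2].sum())                  # other-arm weights sum to 0 ...
print(np.abs(weights[arm == 2]).sum())          # ... but are individually nonzero
```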
- Monday 29 November 2021, 16:00-17:15
- ESCANCIANO Juan Carlos (UC3M): Debiased Semiparametric U-Statistics: with an Application to Inequality of Opportunity
- Monday 8 November 2021, 17:30-18:45
- SANTOS Andres (UCLA): Inference for Large-Scale Linear Systems with Known Coefficients
- Co-authors: Z. Fang, A. Shaikh, and A. Torgovitsky
- Abstract: This paper considers the problem of testing whether there exists a non-negative solution to a possibly under-determined system of linear equations with known coefficients. This hypothesis testing problem arises naturally in a number of settings, including random coefficient, treatment effect, and discrete choice models, as well as a class of linear programming problems. As a first contribution, we obtain a novel geometric characterization of the null hypothesis in terms of identified parameters satisfying an infinite set of inequality restrictions. Using this characterization, we devise a test that requires solving only linear programs for its implementation, and thus remains computationally feasible in the high-dimensional applications that motivate our analysis. The asymptotic size of the proposed test is shown to equal at most the nominal level uniformly over a large class of distributions that permits the number of linear equations to grow with the sample size. (An illustrative sketch follows this entry.)
- Full text [pdf]
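At the population level, the null hypothesis is a linear-programming feasibility statement, which a phase-one LP can check directly. The sketch below shows only that feasibility check; the paper's contribution is a test that accounts for sampling error in the system, which this omits entirely.

```python
# Feasibility of {x >= 0 : Ax = b}, checked with an LP with zero
# objective. This is the known-coefficients, no-noise version of the
# null hypothesis, not the paper's test.
import numpy as np
from scipy.optimize import linprog

def nonneg_solution_exists(A, b):
    res = linprog(c=np.zeros(A.shape[1]), A_eq=A, b_eq=b,
                  bounds=[(0, None)] * A.shape[1], method="highs")
    return res.status == 0          # 0 = feasible/optimal, 2 = infeasible

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
print(nonneg_solution_exists(A, np.array([1.0, 1.0])))    # True
print(nonneg_solution_exists(A, np.array([1.0, -1.0])))   # False: needs x >= 0
```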
- Monday 18 October 2021, 16:00-17:15
- ROTH Jonathan (Brown University): Efficient Estimation for Staggered Rollout Designs
- Co-author: Pedro Sant'Anna
- Abstract: This paper studies efficient estimation of causal effects when treatment is (quasi-)randomly rolled out to units at different points in time. We solve for the most efficient estimator in a class of estimators that nests two-way fixed effects models and other popular generalized difference-in-differences methods. A feasible plug-in version of the efficient estimator is asymptotically unbiased, with efficiency (weakly) dominating that of existing approaches. We provide both t-based and permutation-test-based methods for inference. We illustrate the performance of the plug-in efficient estimator in simulations and in an application to Wood et al. (2020a)'s study of the staggered rollout of a procedural justice training program for police officers. We find that confidence intervals based on the plug-in efficient estimator have good coverage and can be as much as five times shorter than confidence intervals based on existing state-of-the-art methods. As an empirical contribution of independent interest, our application provides the most precise estimates to date on the effectiveness of procedural justice training programs for police officers.
- Full text [pdf]
- Monday 4 October 2021, 16:00-17:15
- MOREIRA Marcelo (FGV): Efficiency Loss of Asymptotically Efficient Tests in an Instrumental Variables Regression + Optimal Invariant Tests in an Instrumental Variables Regression with Heteroskedastic and Autocorrelated Errors
- Co-authors: Geert Ridder and Mahrad Sharifvaghefi
- Monday 27 September 2021, 16:00-17:00
- HAZARD Yagan (Paris School of Economics): Rescuing low-compliance RCTs
- Co-author: Simon Loewe
- Monday 14 June 2021, 16:00-17:15
- Online
- KOOPMAN Siem Jan (Vrije Universiteit Amsterdam): Forecasting in a changing world: from the Great Recession to the COVID-19 pandemic
- Co-authors: Mariia Artemova, Francisco Blasques, and Zhaokun Zhang
- Abstract: We develop a new targeted maximum likelihood estimation method that provides improved forecasting for misspecified linear dynamic models. The method weighs data points in the observed sample and is useful in the presence of data-generating processes featuring structural breaks, complex nonlinearities, or other time-varying properties which cannot easily be captured by model design. Additionally, the method reduces to classical maximum likelihood when the model is well specified, in which case the weights are uniformly set to one. We show how the optimal weights can be set by means of a cross-validation procedure. In a set of Monte Carlo experiments we show that the estimation method can significantly improve the forecasting accuracy of autoregressive models. In an empirical study of forecasting U.S. Industrial Production, we show that forecast accuracy during the Great Recession can be significantly improved by giving greater weight to observations associated with past recessions. We further establish the same empirical finding for the 2008-2009 global financial crisis, for different macroeconomic time series, and for the COVID-19 recession in 2020. (An illustrative sketch follows this entry.)
- Full text [pdf]
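A toy version of the likelihood-weighting idea: for a Gaussian AR(1), weighted maximum likelihood reduces to weighted least squares, so one can down-weight old observations geometrically and pick the decay rate on a validation window. The geometric scheme and the single-break design below are illustrative assumptions; the paper's weighting and cross-validation are considerably richer.

```python
# Weighted ML for a Gaussian AR(1) = weighted least squares of y_t on
# y_{t-1}. Old observations get geometric weight lam^(age); lam = 1
# recovers classical (unweighted) maximum likelihood.
import numpy as np

rng = np.random.default_rng(8)
T = 300
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):                  # AR coefficient shifts: a "break"
    rho = 0.3 if t < 200 else 0.8
    y[t] = rho * y[t - 1] + rng.normal()

def wls_ar1(y, lam):
    x, z = y[:-1], y[1:]
    w = lam ** np.arange(len(x) - 1, -1, -1)    # age 0 = most recent
    return np.sum(w * x * z) / np.sum(w * x * x)

# Pick the decay rate by one-step-ahead MSE on a validation window.
best_lam, best_mse = None, np.inf
for lam in [0.90, 0.95, 0.99, 1.00]:
    errs = [y[t] - wls_ar1(y[:t], lam) * y[t - 1] for t in range(250, T)]
    mse = np.mean(np.square(errs))
    if mse < best_mse:
        best_lam, best_mse = lam, mse
print(best_lam, wls_ar1(y, best_lam))  # favors down-weighting the old regime
```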
- Monday 10 May 2021, 16:00-17:15
- Online
- ABADIE Alberto (MIT): A Penalized Synthetic Control Estimator for Disaggregated Data
- Co-author: Jérémy L'Hour
- Abstract: Synthetic control methods are commonly applied in empirical research to estimate the effects of treatments or interventions on aggregate outcomes. A synthetic control estimator compares the outcome of a treated unit to the outcome of a weighted average of untreated units that best resembles the characteristics of the treated unit before the intervention. When disaggregated data are available, constructing separate synthetic controls for each treated unit may help avoid interpolation biases. However, the problem of finding a synthetic control that best reproduces the characteristics of a treated unit may not have a unique solution. Multiplicity of solutions is a particularly daunting challenge when the data include many treated and untreated units. To address this challenge, we propose a synthetic control estimator that penalizes the pairwise discrepancies between the characteristics of the treated units and the characteristics of the units that contribute to their synthetic controls. The penalization parameter trades off pairwise matching discrepancies with respect to the characteristics of each unit in the synthetic control against matching discrepancies with respect to the characteristics of the synthetic control unit as a whole. We study the properties of this estimator and propose data-driven choices of the penalization parameter. (An illustrative sketch follows this entry.)
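The penalized objective described in the abstract can be written down directly: minimize ||x1 - X0'w||^2 + lambda * sum_j w_j ||x1 - x0j||^2 over simplex weights w. Below is a rough implementation with a generic solver; the penalty level is fixed by hand rather than chosen in the data-driven way the paper proposes, and the data are simulated for illustration.

```python
# Penalized synthetic control weights for one treated unit: trade off
# fit of the weighted average against pairwise discrepancies between
# the treated unit and each contributing donor.
import numpy as np
from scipy.optimize import minimize

def penalized_sc_weights(x1, X0, lam):
    """x1: (k,) treated characteristics; X0: (J, k) donor characteristics."""
    J = X0.shape[0]
    pen = np.sum((X0 - x1) ** 2, axis=1)       # ||x1 - x0j||^2 per donor

    def obj(w):
        return np.sum((x1 - X0.T @ w) ** 2) + lam * pen @ w

    cons = [{"type": "eq", "fun": lambda w: w.sum() - 1}]
    res = minimize(obj, np.full(J, 1 / J), bounds=[(0, 1)] * J,
                   constraints=cons, method="SLSQP")
    return res.x

rng = np.random.default_rng(9)
X0 = rng.normal(size=(20, 4))                  # 20 donors, 4 characteristics
x1 = X0[:3].mean(axis=0) + 0.05 * rng.normal(size=4)
w = penalized_sc_weights(x1, X0, lam=0.1)
print(np.round(w, 3))  # mass concentrates on donors close to the treated unit
```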
- Monday 12 April 2021, 16:00-17:15
- KOCK Anders (Aarhus University / University of Oxford): Consistency of p-norm based tests in high dimensions: characterization, monotonicity, domination
- Co-author: David Preinerstorfer
- Abstract: To understand how the choice of a norm affects the power properties of tests in high dimensions, we study the consistency sets of p-norm based tests in the prototypical framework of sequence models with unrestricted parameter spaces. The consistency set of a test is defined here as the set of all arrays of alternatives the test is consistent against as the dimension of the parameter space diverges. We characterize the consistency sets of p-norm based tests and find, in particular, that consistency against an array of alternatives cannot be determined solely in terms of the p-norm of the alternative. Our characterization also reveals an unexpected monotonicity result: the consistency set is strictly increasing in p over (0, ∞), so that tests based on higher p strictly dominate those based on lower p in terms of consistency. This monotonicity allows us to construct novel tests that dominate, with respect to their consistency behavior, all p-norm based tests without sacrificing asymptotic size.
- Full text [pdf]
- Monday 8 March 2021, 16:00-17:15
- KASY Maximilian (University of Oxford): The social impact of algorithmic decision making: Economic perspectives
- https://maxkasy.github.io/home/files/papers/adaptive_combinatorial.pdf
- Full text [pdf]
- Monday 8 February 2021, 16:00-17:15
- Online
- RAI Yoshiyasu (University of Mannheim): Statistical Inference for Treatment Assignment Policies
- Abstract: In this paper, I study the problem of statistical inference for treatment assignment policies. In typical applications, individuals with different characteristics are expected to differ in their responses to treatment. Hence, treatment assignment policies that allocate treatment based on individuals' observed characteristics can have a significant influence on outcomes and welfare. A growing literature proposes various approaches to estimating the welfare-maximizing treatment assignment policy. This paper complements this work on estimation by developing a method of inference for treatment assignment policies that can be used to assess the precision of estimated optimal policies. In particular, for the welfare criterion used by Kitagawa and Tetenov (2018), my method constructs (i) a confidence set for the optimal policy and (ii) a confidence interval for the maximized welfare. A simulation study indicates that the proposed methods work well with modest sample sizes. I apply the method to experimental data from the National Job Training Partnership Act study.
- Monday 14 December 2020, 16:00-17:15
- FREYBERGER Joachim (University of Bonn): Normalizations and misspecification in skill formation models
- Abstract: An important class of structural models investigates the determinants of skill formation and the optimal timing of interventions. To achieve point identification of the parameters, researchers typically normalize the scale and location of the unobserved skills. This paper shows that these seemingly innocuous restrictions can severely impact the interpretation of the parameters and counterfactual predictions. For example, simply changing the units of measurement of observed variables can yield ineffective investment strategies and misleading policy recommendations. To tackle these problems, this paper provides a new identification analysis, which pools all restrictions of the model, characterizes the identified set of all parameters without normalizations, illustrates which features depend on these normalizations, and introduces a new set of important policy-relevant parameters that are identified under weak assumptions and yield robust conclusions. As a byproduct, this paper also presents a general and formal definition of when restrictions are truly normalizations.
- Full text [pdf]
- Monday 9 November 2020, 16:00-17:15
- RENAULT Jérôme (TSE): Approximate Maximum Likelihood for Complex Structural Models
- Co-authors: D.T. Frazier and V. Czellar
- Abstract: Indirect inference (I-I) is a popular technique for estimating complex parametric models whose likelihood function is intractable, but the statistical efficiency of I-I estimation is questionable. While the efficient method of moments of Gallant and Tauchen (1996) promises efficiency, the price to pay is a loss of parsimony and thereby a potential lack of robustness to model misspecification. This stands in contrast to simpler I-I estimation strategies, which are known to display less sensitivity to model misspecification precisely because of their focus on specific elements of the underlying structural model. In this research, we propose a new simulation-based approach that maintains the parsimony of I-I estimation, which is often critical in empirical applications, but can also deliver estimators that are nearly as efficient as maximum likelihood. This new approach is based on using a constrained approximation to the structural model, which ensures identification and can deliver nearly efficient estimators. We demonstrate the approach through several examples and show that it can deliver estimators that are nearly as efficient as maximum likelihood, when feasible, but can be employed in many situations where maximum likelihood is infeasible.
- Full text [pdf]
- Monday 12 October 2020, 16:00-17:15
- Online
- GUNSILIUS Florian (University of Michigan): Distributional synthetic controls
- Abstract: This article extends the method of synthetic controls to probability measures. The distribution of the synthetic control group is obtained as the optimally weighted barycenter, in Wasserstein space, of the control groups' distributions, chosen to minimize the distance to the treatment group's distribution. The method can be applied to settings with disaggregated or aggregated (functional) data. It produces a generically unique counterfactual distribution when the data are continuously distributed. A basic representation of the barycenter provides a computationally efficient implementation via a straightforward tensor-variate regression approach. In addition, identification results are provided that also shed new light on the classical synthetic controls estimator. As an illustration, the method provides an estimate of the counterfactual distribution of household income in Colorado one year after Amendment 64. (An illustrative sketch follows this entry.)
- Full text [pdf]
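On the real line, a Wasserstein barycenter is the weighted average of quantile functions, so the weight-finding step can be sketched as matching the treated unit's quantile function with a weighted average of the controls' quantile functions over the simplex. The sketch below covers only this step, on assumed simulated data; the counterfactual construction then follows the paper.

```python
# Weight-finding step for a one-dimensional distributional synthetic
# control: fit the treated quantile function by a simplex-weighted
# average of control quantile functions (a Wasserstein barycenter in 1D).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(10)
grid = np.linspace(0.01, 0.99, 99)                 # quantile levels
controls = [rng.normal(loc=m, scale=s, size=500)   # control-group samples
            for m, s in [(0, 1), (2, 1), (1, 2), (-1, 0.5)]]
# A pre-period "treated" sample, built here as a mixture for illustration.
treated = np.concatenate([controls[0][:250], controls[1][:250]])

Qc = np.array([np.quantile(c, grid) for c in controls])  # (J, n_grid)
Qt = np.quantile(treated, grid)

J = Qc.shape[0]
obj = lambda w: np.sum((Qt - w @ Qc) ** 2)
res = minimize(obj, np.full(J, 1 / J), bounds=[(0, 1)] * J,
               constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1}],
               method="SLSQP")
print(np.round(res.x, 3))   # estimated simplex weights for the treated unit
```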
- Monday 14 September 2020, 16:00-17:15
- KAMAT Vishal (Toulouse School of Economics): Estimating the Welfare Effects of School Vouchers
- Co-author: S. Norris
- Abstract: We analyze the welfare effects of voucher provision in the DC Opportunity Scholarship Program (OSP), a school voucher program in Washington, DC, that randomly allocated vouchers to students. To do so, we develop new discrete choice tools showing how data with random allocation of school vouchers can be used to characterize what we can learn about the welfare benefits of providing a voucher of a given amount, as measured by the average willingness to pay for that voucher, and about these benefits net of the costs of providing the voucher. A novel feature of our tools is that they allow the relationship between the demand for the various schools and prices to be specified either entirely nonparametrically or in a flexible parametric manner, neither of which necessarily implies that the welfare parameters are point-identified. Applying our tools to the OSP data, we find that provision of the status-quo voucher, as well as a wide range of counterfactual voucher amounts, has a positive net average benefit. These positive results arise from the presence of many low-tuition schools in the program; removing these schools from the program can result in a negative net average benefit.
- Full text [pdf]