Seminars
Econometrics Seminar
This seminar covers theoretical and applied econometrics. It takes place on the second Monday of each month.
> Scientific coordinator: Philipp Ketz
> Administrative coordinator: Sophie Gozlan
- To subscribe to the Econometrics seminar mailing list and receive session announcements by email: follow this link
This seminar is supported by French state aid managed by the Agence Nationale de la Recherche under the Investissements d'avenir programme, reference ANR-17-EURE-0001.
Upcoming
- Monday 12 April 2021, 16:00-17:15
- KOCK Anders (Aarhus University/University of Oxford) : Consistency of p-norm based tests in high dimensions: characterization, monotonicity, domination
- Co-author: David Preinerstorfer
- Abstract: To understand how the choice of a norm affects the power properties of tests in high dimensions, we study the consistency sets of p-norm based tests in the prototypical framework of sequence models with unrestricted parameter spaces. The consistency set of a test is here defined as the set of all arrays of alternatives the test is consistent against as the dimension of the parameter space diverges. We characterize the consistency sets of p-norm based tests and find, in particular, that consistency against an array of alternatives cannot be determined solely in terms of the p-norm of the alternative. Our characterization also reveals an unexpected monotonicity result: the consistency set is strictly increasing in p ∈ (0, ∞), so that tests based on higher p strictly dominate those based on lower p in terms of consistency. This monotonicity allows us to construct novel tests that dominate, with respect to their consistency behavior, all p-norm based tests without sacrificing asymptotic size. (An illustrative sketch follows below.)
- Full text [pdf]
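To make the role of p concrete, here is a minimal Monte Carlo sketch (my illustration, not from the paper) in the Gaussian sequence model the abstract refers to: observe X_i = theta_i + eps_i with standard normal noise and test theta = 0 with a p-norm statistic whose critical value is simulated under the null. The sample size, sparse alternative, and norm choices are illustrative assumptions.

```python
# Illustrative sketch: power of p-norm tests in a Gaussian sequence model
# X_i = theta_i + eps_i, eps_i ~ N(0, 1), testing H0: theta = 0.
# Critical values are calibrated by Monte Carlo under the null.
import numpy as np

rng = np.random.default_rng(0)
n, n_mc, alpha = 500, 2000, 0.05         # dimension, MC draws, size (assumed)

def p_norm(x, p):
    return np.sum(np.abs(x) ** p, axis=-1) ** (1.0 / p)

def rejection_rate(theta, p):
    null_stats = p_norm(rng.standard_normal((n_mc, n)), p)
    crit = np.quantile(null_stats, 1 - alpha)        # size-alpha cutoff
    alt_stats = p_norm(theta + rng.standard_normal((n_mc, n)), p)
    return np.mean(alt_stats > crit)

# A sparse alternative: a few large coordinates. Higher p tends to detect
# it more easily, in line with the monotonicity result of the talk.
theta = np.zeros(n)
theta[:3] = 4.0
for p in (1, 2, 4):
    print(f"p = {p}: rejection rate = {rejection_rate(theta, p):.2f}")
```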
- Monday 10 May 2021, 16:00-17:15
- ABADIE Alberto (Harvard University) : TBA
- Monday 14 June 2021, 16:00-17:15
- KOOPMAN Siem Jan (Vrije Universiteit Amsterdam) : TBA
Archives
- Monday 8 March 2021, 16:00-17:15
- KASY Maximilian (University of Oxford) : The social impact of algorithmic decision making: Economic perspectives
- https://maxkasy.github.io/home/files/papers/adaptive_combinatorial.pdf
- Full text [pdf]
- Monday 8 February 2021, 16:00-17:15
- Online
- RAI Yoshiyasu (University of Mannheim) : Statistical Inference for Treatment Assignment Policies
- Abstract: In this paper, I study the statistical inference problem for treatment assignment policies. In typical applications, individuals with different characteristics are expected to differ in their responses to treatment. Hence, treatment assignment policies that allocate treatment based on individuals' observed characteristics can have a significant influence on outcomes and welfare. A growing literature proposes various approaches to estimating the welfare-maximizing treatment assignment policy. This paper complements this work on estimation by developing a method of inference for treatment assignment policies that can be used to assess the precision of estimated optimal policies. In particular, for the welfare criterion used by Kitagawa and Tetenov (2018), my method constructs (i) a confidence set for the optimal policy and (ii) a confidence interval for the maximized welfare. A simulation study indicates that the proposed methods work well with modest sample sizes. I apply the method to experimental data from the National Job Training Partnership Act study. (An illustrative sketch follows below.)
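For context, the following toy sketch (my illustration, not the paper's method) shows the estimation step the inference problem is built on: empirical welfare maximization in the spirit of Kitagawa and Tetenov (2018), over a hypothetical class of threshold rules with a known propensity score and simulated data. The paper's confidence sets for the optimal policy and the maximized welfare are not reproduced here.

```python
# Toy sketch: empirical welfare maximization over threshold policies
# pi(x) = 1{x >= c}, with randomized treatment and known propensity e.
import numpy as np

rng = np.random.default_rng(1)
n, e = 2000, 0.5                         # sample size, P(D = 1) (assumed)
x = rng.uniform(-1, 1, n)                # observed characteristic
d = rng.binomial(1, e, n)                # randomized treatment indicator
tau = x                                  # treatment effect increasing in x
y = 0.5 * x + d * tau + rng.standard_normal(n)

def empirical_welfare(c):
    pi = (x >= c).astype(float)          # candidate assignment rule
    # Inverse-propensity-weighted estimate of welfare under policy pi
    return np.mean(y * d * pi / e + y * (1 - d) * (1 - pi) / (1 - e))

grid = np.linspace(-1, 1, 201)
welfare = np.array([empirical_welfare(c) for c in grid])
c_hat = grid[np.argmax(welfare)]
print(f"estimated threshold: {c_hat:.2f}, welfare: {welfare.max():.3f}")
# The true optimal threshold is 0 here, since tau(x) = x changes sign at 0;
# quantifying the sampling uncertainty in c_hat is the talk's subject.
```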
- Monday 14 December 2020, 16:00-17:15
- FREYBERGER Joachim (University of Bonn) : Normalizations and misspecification in skill formation models
- Abstract: An important class of structural models investigates the determinants of skill formation and the optimal timing of interventions. To achieve point identification of the parameters, researchers typically normalize the scale and location of the unobserved skills. This paper shows that these seemingly innocuous restrictions can severely impact the interpretation of the parameters and counterfactual predictions. For example, simply changing the units of measurement of observed variables might yield ineffective investment strategies and misleading policy recommendations. To tackle these problems, this paper provides a new identification analysis, which pools all restrictions of the model, characterizes the identified set of all parameters without normalizations, illustrates which features depend on these normalizations, and introduces a new set of important policy-relevant parameters that are identified under weak assumptions and yield robust conclusions. As a byproduct, this paper also presents a general and formal definition of when restrictions are truly normalizations. (An illustrative sketch follows below.)
- Full text [pdf]
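The sensitivity to units can be seen in a small closed-form example (my illustration, not from the paper): in a CES skill technology, rescaling the latent skill, as happens when the units of the anchoring measurement change under a scale normalization, is observationally equivalent to changing the share parameter that is often read as the relative importance of current skills. The parameter values below are arbitrary.

```python
# If theta_next = A * (g * theta**phi + (1 - g) * I**phi) ** (1 / phi),
# then replacing theta by a * theta (a change of measurement units) fits
# the same data with share g~ = g * a**(-phi) / (g * a**(-phi) + 1 - g)
# and a rescaled TFP constant, so g is not invariant to the normalization.
def rescaled_share(g, phi, a):
    w = g * a ** (-phi)
    return w / (w + (1 - g))

g, phi = 0.5, -0.5                       # equal shares in the original units
for a in (0.5, 1.0, 2.0, 10.0):
    print(f"scale a = {a:>4}: implied share = {rescaled_share(g, phi, a):.3f}")
```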
- Monday 9 November 2020, 16:00-17:15
- RENAULT Eric (University of Warwick) : Approximate Maximum Likelihood for Complex Structural Models
- Co-authors: D.T. Frazier and V. Czellar
- Abstract: Indirect Inference (I-I) is a popular technique for estimating complex parametric models whose likelihood function is intractable; however, the statistical efficiency of I-I estimation is questionable. While the efficient method of moments of Gallant and Tauchen (1996) promises efficiency, the price to pay for it is a loss of parsimony and thereby a potential lack of robustness to model misspecification. This stands in contrast to simpler I-I estimation strategies, which are known to display less sensitivity to model misspecification precisely because they focus on specific elements of the underlying structural model. In this research, we propose a new simulation-based approach that maintains the parsimony of I-I estimation, which is often critical in empirical applications, but can also deliver estimators that are nearly as efficient as maximum likelihood. The approach is based on a constrained approximation to the structural model, which ensures identification. We demonstrate it through several examples, showing that it is nearly as efficient as maximum likelihood when the latter is feasible and remains applicable in many situations where it is not. (An illustrative sketch follows below.)
- Full text [pdf]
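For readers unfamiliar with I-I, here is a minimal sketch of the generic procedure (not the paper's new approximate-likelihood approach): estimate an MA(1) coefficient, treating its likelihood as if it were intractable, by matching an AR(1) auxiliary coefficient computed on observed and simulated data. Sample size and simulation count are illustrative.

```python
# Generic indirect inference: match an auxiliary statistic (AR(1) slope)
# between observed data and data simulated from the structural MA(1) model.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
n, theta0, S = 1000, 0.5, 10             # sample size, true value, sim paths

def ma1_paths(theta, shocks):
    return shocks[:, 1:] + theta * shocks[:, :-1]

def ar1_coef(y):
    # Auxiliary statistic: first-order autoregressive coefficient
    return np.dot(y[1:], y[:-1]) / np.dot(y[:-1], y[:-1])

y_obs = ma1_paths(theta0, rng.standard_normal((1, n + 1)))[0]
beta_obs = ar1_coef(y_obs)

shocks = rng.standard_normal((S, n + 1)) # common draws reused for all theta

def ii_objective(theta):
    betas = [ar1_coef(y) for y in ma1_paths(theta, shocks)]
    return (beta_obs - np.mean(betas)) ** 2

res = minimize_scalar(ii_objective, bounds=(-0.95, 0.95), method="bounded")
print(f"true theta = {theta0}, I-I estimate = {res.x:.3f}")
```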
- Monday 12 October 2020, 16:00-17:15
- Online
- GUNSILIUS Florian (University of Michigan) : Distributional synthetic controls
- Abstract: This article extends the method of synthetic controls to probability measures. The distribution of the synthetic control group is obtained as the optimally weighted barycenter in Wasserstein space of the distributions of the control groups, which minimizes the distance to the distribution of the treatment group. The method can be applied to settings with disaggregated or aggregated (functional) data. It produces a generically unique counterfactual distribution when the data are continuously distributed. A basic representation of the barycenter provides a computationally efficient implementation via a straightforward tensor-variate regression approach. In addition, identification results are provided that also shed new light on the classical synthetic controls estimator. As an illustration, the method provides an estimate of the counterfactual distribution of household income in Colorado one year after Amendment 64. (An illustrative sketch follows below.)
- Full text [pdf]
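In one dimension the construction simplifies considerably, because the 2-Wasserstein barycenter of scalar distributions is the weighted average of their quantile functions. The sketch below (my simplification under that assumption, with made-up data) recovers the synthetic control weights by constrained least squares on a grid of quantile levels.

```python
# 1-D distributional synthetic control: choose simplex weights so that the
# weighted average of control quantile functions matches the treated one.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(4)
taus = np.linspace(0.01, 0.99, 99)       # quantile levels

# Pre-treatment samples: one treated unit and three control units
treated = rng.normal(1.0, 1.2, 500)
controls = [rng.normal(m, s, 500) for m, s in [(0, 1), (2, 1), (1, 2)]]

q_treated = np.quantile(treated, taus)
Q = np.column_stack([np.quantile(c, taus) for c in controls])  # (99, J)
J = Q.shape[1]

def loss(lam):
    return np.sum((Q @ lam - q_treated) ** 2)

res = minimize(loss, np.full(J, 1 / J), method="SLSQP",
               bounds=[(0, 1)] * J,
               constraints={"type": "eq", "fun": lambda l: l.sum() - 1})
print("synthetic control weights:", np.round(res.x, 3))
# Q @ res.x is the quantile function of the synthetic distribution; its
# post-treatment gap to the treated quantile function estimates the
# distributional effect (here the true weights are about 0.4, 0.4, 0.2).
```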
- Monday 14 September 2020, 16:00-17:15
- KAMAT Vishal (Toulouse School of Economics) : Estimating the Welfare Effects of School Vouchers
- Co-author: S. Norris
- Abstract: We analyze the welfare effects of voucher provision in the DC Opportunity Scholarship Program (OSP), a school voucher program in Washington, DC, that randomly allocated vouchers to students. To do so, we develop new discrete choice tools that show how data with randomly allocated vouchers can be used to characterize what we can learn about the welfare benefits of providing a voucher of a given amount, as measured by the average willingness to pay for that voucher, and about these benefits net of the costs of providing the voucher. A novel feature of our tools is that they allow the relationship of school demand to prices to be entirely nonparametric or to be parameterized in a flexible manner, neither of which necessarily implies that the welfare parameters are point identified. Applying our tools to the OSP data, we find that provision of the status-quo voucher amount, as well as a wide range of counterfactual amounts, has a positive net average benefit. These positive results arise from the presence of many low-tuition schools in the program; removing these schools can result in a negative net average benefit. (An illustrative sketch follows below.)
- Full text [pdf]
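As a back-of-envelope illustration of the welfare object (a parametric stand-in; the talk's tools deliberately allow demand to be nonparametric, which is why its welfare parameters may only be partially identified): in a logit school-choice model, the average willingness to pay for a voucher is the change in expected consumer surplus when out-of-pocket tuition falls by the voucher amount. All numbers below are made up.

```python
# WTP for a voucher in a logit school-choice model: the voucher lowers
# out-of-pocket tuition at private schools, and average WTP is the change
# in expected consumer surplus, logsumexp(V) / alpha.
import numpy as np
from scipy.special import logsumexp

alpha = 2.0                                # marginal utility of money (assumed)
tuition = np.array([0.0, 0.3, 0.5, 1.5])   # public school plus three private
quality = np.array([0.0, 0.5, 0.8, 1.2])   # mean utilities gross of price

def surplus(voucher):
    # Out-of-pocket tuition, floored at zero for low-tuition schools
    price = np.concatenate(([0.0], np.maximum(tuition[1:] - voucher, 0.0)))
    return logsumexp(quality - alpha * price) / alpha

for v in (0.2, 0.5, 1.0):
    print(f"voucher = {v}: average WTP = {surplus(v) - surplus(0.0):.3f}")
# The talk compares such benefits with the cost of providing the voucher,
# without committing to the logit functional form used here.
```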