
SMU SOE Seminar (Mar 9, 2018): On the Iterated Estimation of Dynamic Discrete Choice Games

TOPIC: 

ON THE ITERATED ESTIMATION OF DYNAMIC DISCRETE CHOICE GAMES

This paper investigates the asymptotic properties of a class of estimators of the structural parameters in dynamic discrete choice games. We consider K-stage policy iteration (PI) estimators, where K ∈ ℕ denotes the number of policy iterations employed in the estimation. This class nests several estimators proposed in the literature. By considering a "maximum likelihood" criterion function, our estimator becomes the K-ML estimator in Aguirregabiria and Mira (2002, 2007). By considering a "minimum distance" criterion function, it defines a new K-MD estimator, which is an iterative version of the estimators in Pesendorfer and Schmidt-Dengler (2008) and Pakes et al. (2007).
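The K-stage idea described above can be illustrated with a deliberately simplified sketch: alternate between maximizing a pseudo-likelihood given current choice probabilities and applying one policy-iteration step, K times. The toy binary-choice model, the mapping `psi`, the parameter `beta`, and the grid-search maximizer below are all illustrative assumptions, not the paper's actual game or estimator.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-agent binary-choice model (illustrative assumption, not the
# paper's dynamic game): the conditional choice probability in state s is
#   Psi(theta, P)[s] = sigmoid(theta * x[s] + beta * P[s]),
# so beliefs P enter the best response, mimicking a policy-iteration map.
x = np.array([-1.0, 1.0])   # state-specific covariate (assumed)
beta = 0.5                  # continuation weight, treated as known (assumed)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def psi(theta, P):
    # One policy-iteration step: best-response choice probabilities
    # given structural parameter theta and current beliefs P.
    return sigmoid(theta * x + beta * P)

def pseudo_loglik(theta, P, states, choices):
    # Pseudo-log-likelihood of observed choices, holding beliefs P fixed.
    p = psi(theta, P)[states]
    return np.sum(choices * np.log(p) + (1.0 - choices) * np.log(1.0 - p))

def k_stage_pml(states, choices, P0, K, grid=np.linspace(-3, 3, 601)):
    # K-stage policy-iteration pseudo-ML: at each stage, maximize the
    # pseudo-likelihood (here by crude grid search), then update beliefs
    # with one application of psi.
    P = P0.copy()
    theta_hat = np.nan
    for _ in range(K):
        lls = [pseudo_loglik(t, P, states, choices) for t in grid]
        theta_hat = grid[int(np.argmax(lls))]
        P = psi(theta_hat, P)
    return theta_hat, P

# Simulate data from the model's fixed point at theta_true = 1.2.
theta_true = 1.2
P_star = np.full(2, 0.5)
for _ in range(200):                 # solve the fixed point by iteration
    P_star = psi(theta_true, P_star)
states = rng.integers(0, 2, size=5000)
choices = (rng.random(5000) < P_star[states]).astype(float)

# Deliberately misspecified starting beliefs P0 = 0.5:
theta1, _ = k_stage_pml(states, choices, P0=np.full(2, 0.5), K=1)
theta2, _ = k_stage_pml(states, choices, P0=np.full(2, 0.5), K=2)
print(theta1, theta2)
```

In this toy run, the 1-stage estimate inherits bias from the misspecified starting beliefs, while a second policy iteration moves the estimate toward the true parameter, which is the mechanism that makes the dependence of the asymptotic distribution on K an interesting question.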
 
First, we establish that the K-ML estimator is consistent and asymptotically normal for any K ∈ ℕ. This complements findings in Aguirregabiria and Mira (2007), who focus on K = 1 and K large enough to induce convergence of the estimator. Furthermore, we show that the asymptotic variance of the K-ML estimator can exhibit arbitrary patterns as a function of K.
 
Second, we establish that the K-MD estimator is consistent and asymptotically normal for any fixed K ∈ ℕ. For a specific choice of the weight matrix, the K-MD estimator has the same asymptotic distribution as the K-ML estimator. Our main result provides an optimal sequence of weight matrices for the K-MD estimator and shows that the optimally weighted K-MD estimator has an asymptotic distribution that is invariant to K. This new result is especially unexpected given the findings in Aguirregabiria and Mira (2007) for K-ML estimators. Our main result implies two new and important corollaries about the optimal 1-MD estimator (derived by Pesendorfer and Schmidt-Dengler (2008)). First, the optimal 1-MD estimator is optimal in the class of K-MD estimators for all K ∈ ℕ. In other words, additional policy iterations do not provide asymptotic efficiency gains relative to the optimal 1-MD estimator. Second, the optimal 1-MD estimator is asymptotically more efficient than, or equally efficient as, any K-ML estimator for all K ∈ ℕ.
 
Keywords: Dynamic Discrete Choice Problems, Dynamic Games, Pseudo Maximum Likelihood Estimator, Minimum Distance Estimator, Estimation, Asymptotic Efficiency.
 
JEL Classification: C13, C61, C73
 

Federico Bugni

Duke University

Theoretical and Applied Econometrics
 

9 March 2018 (Friday)

4pm - 5.30pm

Meeting Room 5.1, Level 5
School of Economics 
Singapore Management University
90 Stamford Road
Singapore 178903