Crowder and Stephens note that the probability generating function (pgf) of the observed aggregate counts has a fairly simple form. This motivates an estimation procedure based on matching the empirical and theoretical versions of the pgf.
Through a series of simulations, the authors demonstrate that an improvement in efficiency over methods based on second moments is possible for small sample sizes (e.g. n <= 25), and they compare with the efficiency attainable from micro-data (i.e. where the individual transition counts are known). However, for such small samples, computing the full likelihood for the aggregate data should itself be a viable option. The practical questions are therefore: (i) does the pgf-based approach give any real improvement in efficiency over second-moment approaches for, say, n = 100 (the second-moment approaches are asymptotically efficient with appropriate weights); and (ii) does the pgf method outperform, or at least match, the full likelihood for small sample sizes (i.e. n <= 25) where the full likelihood is calculable? These questions are not really addressed in the paper.
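To make the pgf-matching idea concrete, here is a minimal sketch for a simple Poisson model. This is an illustrative choice, not the authors' exact setup or model: the estimator minimises the squared distance between the empirical pgf, G_hat(s) = (1/n) * sum_i s^{x_i}, and the theoretical Poisson pgf, G(s; lam) = exp(lam * (s - 1)), over a small grid of evaluation points s.

```python
import numpy as np

def pgf_estimate(x, s_grid=(0.2, 0.4, 0.6, 0.8)):
    """Estimate the Poisson rate by matching empirical and theoretical pgfs.

    A sketch only: the evaluation points s_grid and the unweighted
    squared-distance criterion are arbitrary illustrative choices.
    """
    s = np.asarray(s_grid)
    # Empirical pgf evaluated at each point s: (1/n) * sum_i s^{x_i}
    g_hat = np.array([np.mean(si ** x) for si in s])
    # Grid search over candidate rates (crude but dependency-free)
    lam_grid = np.linspace(0.01, 15.0, 3000)
    # Theoretical Poisson pgf exp(lam * (s - 1)) for every (lam, s) pair
    g_model = np.exp(np.outer(lam_grid, s - 1.0))
    # Pick the rate minimising the squared pgf-matching criterion
    obj = np.sum((g_model - g_hat) ** 2, axis=1)
    return lam_grid[np.argmin(obj)]

rng = np.random.default_rng(0)
x = rng.poisson(3.0, size=25)  # small sample, as in the paper's setting
lam_hat = pgf_estimate(x)
```

With more evaluation points and an appropriate weighting of the matching criterion, such estimators can be tuned, which is exactly why the comparison with weighted second-moment methods and the full likelihood matters.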