Martin Crowder and David Stephens have a new paper in the Journal of Statistical Planning and Inference. It considers estimation of a discrete-time homogeneous Markov chain from aggregate data (which they term macro-data), i.e. where only the overall state counts for the N patients are known at each time j, while the individual transition counts are unknown. The likelihood for such data is intractable because it involves summing over the vast number of possible transitions that are consistent with the observed aggregate counts. As a result, inference methods have focused on moment-based estimation (see e.g. Kalbfleisch, Lawless and Vollmer, Biometrics 1983), which is reasonably effective in practice.
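To fix ideas, here is a minimal Python sketch of what such macro-data look like. The two-state transition matrix, sample size and number of time points are purely illustrative (not taken from the paper): individual trajectories are simulated and then discarded, keeping only the aggregate state counts at each time.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative 2-state transition matrix (rows sum to 1); not from the paper.
P = np.array([[0.8, 0.2],
              [0.3, 0.7]])
N = 25          # number of patients
T = 10          # number of observation times

# Simulate individual trajectories, then keep only the aggregate
# state counts at each time (the "macro-data").
states = rng.integers(0, 2, size=N)              # arbitrary initial states
macro = [np.bincount(states, minlength=2)]
for _ in range(T):
    # each patient moves independently according to its row of P
    states = np.array([rng.choice(2, p=P[s]) for s in states])
    macro.append(np.bincount(states, minlength=2))
macro = np.vstack(macro)                          # (T+1) x 2 array of counts
print(macro)
```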
Crowder and Stephens note that the probability generating function (pgf) of the observed aggregate counts has a fairly simple form. This motivates an estimation procedure based on matching observed quantities to their expectations, which are given by the pgf. An obvious practical issue is the choice of argument vectors at which to compare the pgf with its sample counterpart. The authors propose choices that avoid computational problems for large sample sizes.
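As a rough illustration of the general idea (and not the authors' exact criterion), note that for independently moving individuals the conditional pgf of the next count vector given the current counts n_j is prod_i (sum_k p_ik s_k)^{n_ij}. One could evaluate this at a handful of probe vectors s and choose the transition probabilities that bring it closest to the corresponding observed quantity prod_k s_k^{n_k,j+1}. The probe vectors and placeholder data below are hypothetical, and the weighting/choice of probes is exactly the issue the paper addresses.

```python
import numpy as np
from scipy.optimize import minimize

# Placeholder macro-data: aggregate counts of N = 25 patients in 2 states
# at successive times (purely illustrative, not from the paper).
macro = np.array([[15, 10], [16, 9], [17, 8], [18, 7], [18, 7], [19, 6]])

# Conditional pgf of the counts at time j+1 given counts n_j:
#   E[ prod_k s_k^{N_{k,j+1}} | n_j ] = prod_i ( sum_k p_ik s_k )^{n_ij}
def cond_pgf(P, n_j, s):
    return np.prod((P @ s) ** n_j)

def observed(n_next, s):
    return np.prod(s ** n_next)

# Hypothetical probe vectors s; choosing these well is part of the paper's
# contribution and is not reproduced here.
probes = [np.array([0.90, 0.95]), np.array([0.95, 0.90]), np.array([0.85, 0.90])]

def objective(theta):
    a, b = theta                                  # off-diagonal transition probs
    P = np.array([[1 - a, a], [b, 1 - b]])
    return sum((observed(macro[j + 1], s) - cond_pgf(P, macro[j], s)) ** 2
               for j in range(len(macro) - 1) for s in probes)

res = minimize(objective, x0=[0.5, 0.5], bounds=[(1e-6, 1 - 1e-6)] * 2)
print(res.x)   # estimated off-diagonal transition probabilities
```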
Through a series of simulations, the authors demonstrate that an improvement in efficiency over methods based on second moments is possible for small sample sizes (e.g. n <= 25). A comparison is also made with the efficiency attainable from micro-data (i.e. where the transition counts are known). However, for such small samples, direct computation of the full likelihood for the aggregate data should itself be a viable option. The practical questions are whether the pgf-based approach gives any real improvement in efficiency over second-moment approaches for, say, n = 100 (the second-moment approaches are asymptotically efficient with appropriate weights), and whether the pgf method outperforms (or matches) the full likelihood for small sample sizes (i.e. n <= 25), where the full likelihood is calculable. These issues aren't really addressed in the paper.
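For a two-state chain the exact aggregate-data likelihood is indeed easy to compute for small counts, since the transition table between two consecutive count vectors has only one free cell. Here is a brute-force sketch (my own illustration, not taken from the paper) that sums over all transition tables consistent with both margins.

```python
import numpy as np
from math import comb

def step_lik(P, n_prev, n_next):
    """Exact probability of aggregate counts n_next given n_prev for a
    2-state chain, summing over every transition table consistent with
    both margins (brute force; feasible only for small counts)."""
    a, b = n_prev      # patients currently in states 1 and 2
    c, _ = n_next      # patients in state 1 at the next time
    total = 0.0
    # m = number of patients moving 1 -> 1; the other three cells follow
    for m in range(max(0, c - b), min(a, c) + 1):
        total += (comb(a, m) * P[0, 0] ** m * P[0, 1] ** (a - m)
                  * comb(b, c - m) * P[1, 0] ** (c - m) * P[1, 1] ** (b - c + m))
    return total

def log_lik(P, macro):
    return sum(np.log(step_lik(P, macro[j], macro[j + 1]))
               for j in range(len(macro) - 1))

# illustrative values only
P = np.array([[0.8, 0.2], [0.3, 0.7]])
macro = np.array([[15, 10], [16, 9], [14, 11]])
print(log_lik(P, macro))
```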