Saturday, 30 March 2013

Simulation Based Confidence Intervals for Functions with Complicated Derivatives


Micha Mandel has a new paper in The American Statistician. This concerns the simulation-based Delta method approach to obtaining asymptotic confidence intervals, which has been used quite extensively for obtaining confidence intervals for transition probabilities in multi-state models. This essentially involves assuming $\hat{\theta} \sim N(\theta, \hat{\Sigma})$, approximately, and constructing confidence intervals for $g(\theta)$ by simulating draws $\theta^{(b)} \sim N(\hat{\theta}, \hat{\Sigma})$, $b = 1, \ldots, B$, and then considering the empirical distribution of $g(\theta^{(b)})$.

A formal justification for the approach is laid out, and some simulations of its performance compared to the standard Delta method in some important cases are given. It is stated that the utility of the simulation method lies not in situations where the Delta method fails, but in situations where calculating the derivatives needed for the Delta method is difficult. In particular, it still requires the functional $g$ to be differentiable. This seems to downplay the simulation method slightly. One advantage of the simulation method is that it is not necessary to linearize $g$ around the MLE. To take a pathological example, consider data that are $N(\mu, 1)$. Suppose we are interested in estimating $g(\mu) = \mu^3$. When $\mu = 0$ the coverage of the Delta method confidence interval will be anti-conservative because $g$ has a point of inflection at $\mu = 0$. The simulation-based interval will work fine in this situation (as obviously would just constructing a confidence interval for $\mu$ and cubing the upper and lower limits!).
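A minimal sketch of this pathological example (my own code, with an assumed sample size and the variance treated as known), comparing the Delta method interval with the simulation-based interval for $g(\mu) = \mu^3$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: X_1, ..., X_n ~ N(mu, 1); interest is in g(mu) = mu^3.
n, mu = 50, 0.0
x = rng.normal(mu, 1.0, size=n)

mu_hat = x.mean()
se = 1.0 / np.sqrt(n)  # standard error of the mean (sigma = 1 known)

# Delta method: Var(g(mu_hat)) ~ (g'(mu_hat))^2 * se^2, with g'(mu) = 3 mu^2.
g_hat = mu_hat ** 3
se_delta = 3 * mu_hat ** 2 * se
delta_ci = (g_hat - 1.96 * se_delta, g_hat + 1.96 * se_delta)

# Simulation method: draw theta* ~ N(mu_hat, se^2), transform by g,
# and take the empirical 2.5% and 97.5% quantiles.
theta_star = rng.normal(mu_hat, se, size=100_000)
sim_ci = np.quantile(theta_star ** 3, [0.025, 0.975])

print("Delta method CI:     ", delta_ci)
print("Simulation method CI:", sim_ci)
```

Because $g'(\hat{\mu})$ is nearly zero when $\hat{\mu}$ is near 0, the Delta interval collapses towards a point, while the simulation interval retains sensible width (it equals the cubed endpoints of the interval for $\mu$, up to Monte Carlo error).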

Tuesday, 5 February 2013

The gradient function as an exploratory goodness-of-fit assessment of the random-effects distribution in mixed models


Geert Verbeke and Geert Molenberghs have a new paper in Biostatistics. The paper proposes the use of the gradient function (or equivalently the directional derivatives) of the marginal likelihood with respect to the random-effects distribution as a way of assessing goodness-of-fit in a mixed model. They concentrate on cases related to standard longitudinal data analysis using linear (or generalized linear) mixed models; however, the method can be extended to other mixed models, such as clustered multi-state models with multivariate (log-)normal random effects.

If we consider data from units $i$ with observations $x_i$, then given a mixing distribution $G$ the marginal density is given by $f(x_i \mid G) = \int f(x_i \mid u) \, dG(u)$.

The gradient function is then taken as $\Delta(u) = \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i \mid u)}{f(x_i \mid G)}$, where $N$ is the total number of independent clusters.

The use of the gradient function stems from finite mixture models and in particular the problem of finding the non-parametric maximum likelihood estimate of the mixing distribution. At the NPMLE the gradient function has a supremum of 1. If instead we assume there is a parametric mixing distribution, under correct specification the gradient function should be close to 1 across all values of $u$. Verbeke and Molenberghs use this property to construct an informal graphical diagnostic of the appropriateness of the proposed random effects distribution.
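As a small numerical sketch of this diagnostic (a toy normal-normal model of my own devising, not an example from the paper), the gradient function can be evaluated on a grid of $u$ values and compared to 1. Here the marginal density is available in closed form, so no numerical integration is needed:

```python
import numpy as np

rng = np.random.default_rng(2)

def norm_pdf(x, loc, scale):
    # Normal density, written out to avoid a scipy dependency.
    return np.exp(-0.5 * ((x - loc) / scale) ** 2) / (scale * np.sqrt(2 * np.pi))

# Toy setting with one observation per cluster:
#   x_i | u ~ N(u, sigma^2),  u ~ G = N(0, tau^2),
# so the marginal density is f(x_i | G) = N(0, sigma^2 + tau^2).
sigma, tau, N = 1.0, 1.0, 2000
x = rng.normal(0.0, np.sqrt(sigma**2 + tau**2), size=N)

# Gradient function Delta(u) = (1/N) * sum_i f(x_i | u) / f(x_i | G),
# evaluated on a grid of u values.
u_grid = np.linspace(-2.0, 2.0, 41)
cond = norm_pdf(x[None, :], u_grid[:, None], sigma)    # f(x_i | u), shape (41, N)
marg = norm_pdf(x, 0.0, np.sqrt(sigma**2 + tau**2))    # f(x_i | G), shape (N,)
delta = (cond / marg[None, :]).mean(axis=1)

# Under a correctly specified G, Delta(u) should hover around 1 for all u.
print(np.round(delta, 2))
```

Note that $E\left[f(x \mid u)/f(x \mid G)\right] = \int f(x \mid u)\,dx = 1$ exactly when $x$ is drawn from the true marginal, which is why the plotted curve should fluctuate around 1 under correct specification; a misspecified $G$ (say, fitting too small a $\tau$) would produce systematic departures.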

An advantage of the approach is that essentially no additional calculations are required to compute the measure, above and beyond those already needed for estimation of the parametric mixture model itself. A current limitation of the approach is that there is no formal test to assess whether the observed deviation is statistically significant. It is stated that this is ongoing work. It seems reasonably straightforward to show that the gradient function will tend to a Gaussian process with mean 1 but with a quite complicated covariance structure. Obtaining some nice asymptotics for a statistic based either on the maximum deviation from 1 or some weighted integral of the distance from 1 therefore seems unlikely. However, it may be possible to obtain a simulation based p-value by simulating from the limiting Gaussian process.

Wednesday, 9 January 2013

Book Review of: Competing Risks and Multistate Models with R.


Ross Maller has written a book review of Beyersmann, Schumacher and Allignol's recent Springer book, Competing Risks and Multistate Models with R, published in the Australian & New Zealand Journal of Statistics. The review is primarily a rant against the cause-specific hazard approach to modelling competing risks. For instance, cause-specific hazards "do not explicitly take into account the obvious mixture of distributions inherent in the data." Moreover, the fact that assuming proportionality of cause-specific hazards (CSHs) can lead to non-proportional, even crossing, relationships for the cumulative incidence functions (CIFs) is painted as a terminal weakness of the approach.
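The CSH-to-CIF relationship behind that criticism is easy to illustrate. A toy sketch with constant cause-specific hazards and made-up covariate effects (my own numbers, not taken from the book or the review) shows proportional CSHs producing crossing CIFs:

```python
import numpy as np

# Constant cause-specific hazards with proportional covariate effects:
#   h_k(z) = h_k0 * exp(beta_k * z),  k = 1, 2,  z in {0, 1}.
# (Illustrative values only.)
h10, h20 = 0.5, 0.5
b1, b2 = np.log(2.0), np.log(8.0)

def cif1(t, z):
    """CIF for cause 1 under constant hazards:
    int_0^t h1 * exp(-(h1 + h2) s) ds = h1/(h1+h2) * (1 - exp(-(h1+h2) t))."""
    h1 = h10 * np.exp(b1 * z)
    h2 = h20 * np.exp(b2 * z)
    H = h1 + h2
    return (h1 / H) * (1.0 - np.exp(-H * t))

t = np.linspace(0.01, 5.0, 500)
diff = cif1(t, 1) - cif1(t, 0)

# Early on the z=1 curve lies above (higher cause-1 hazard), but its
# long-run value h1/(h1+h2) = 1/5 sits below the z=0 limit of 1/2,
# so the two cause-1 CIFs cross despite exactly proportional CSHs.
print("CIFs cross:", bool(np.any(diff > 0) and np.any(diff < 0)))
```

Whether this non-proportionality on the CIF scale is a flaw of the model or simply a property of competing risks is, of course, exactly the point of disagreement.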

Maller's main contribution to survival analysis is through models for cure fractions (see e.g. Maller and Zhou 1995), an approach that he is evidently very taken with. Apparently the correct approach to modelling competing risks data is to assume a finite mixture model, such that individuals in a particular latent class are only at risk of one particular failure type. Moreover, the problem of covariates is claimed to be entirely solved by allowing proportional hazards within failure types, which Maller says is the approach taken by Fine and Gray (1999).

The entire nature of survival and event history analysis lies in modelling the dynamics of the process. In most circumstances it is much more useful to be able to describe the process at time t, given that no event has occurred by time t, than to describe the process conditional on latent class membership. Moreover, in the vast majority of competing risks data, at least in medical contexts, all patients are at risk of all event types until experiencing an event. A mixture model could therefore only ever be viewed as a mathematical convenience. The fact that in practice a CSH method is actually substantially more convenient, particularly if a non- or semi-parametric approach is to be adopted, hardly aids the case for mixture models.

Maller also misrepresents the Fine-Gray approach, which does not assume proportional hazards within failure types. The Larson-Dinse (1985) paper that Maller also cites does involve that approach, but it can lead to the same crossing cumulative incidence curves Maller takes issue with in the context of CSHs. Fine-Gray assumes proportionality of the subdistribution hazard for a particular cause. This gives a direct covariate model for that cause's corresponding CIF but, as a consequence, is unable to provide covariate models for the other CIFs that are guaranteed to yield a feasible set of CIFs for all covariate values (i.e. we can fit a Fine-Gray model to each cause of failure, but the resulting models will be contradictory).

Fundamentally, whatever model is postulated, we can find the implied cause-specific hazards. Assuming proportionality of the cause-specific hazards is obviously only a modelling assumption, but in nearly all cases it will be a better starting point than assuming the existence of cure fractions.