Statistics Seminars: Some results on correcting for measurement error
24 March 2009 14:15 in CM107
Measurement error modelling, also called errors-in-variables modelling, is a generic term for all situations where additional uncertainty in the variables has to be taken into account in order to avoid severe bias in the statistical analysis. The problem is omnipresent in technical statistics, where data from imperfect measurement instruments are analyzed, as well as in biometrics, econometrics and the social sciences, where operationalizations (surrogates) are used in place of complex theoretical constructs.

After an introduction to the area of measurement error modelling, the talk discusses the power and some limitations of Nakamura's general principle of corrected score functions, mainly in the context of failure time data. Starting with classical covariate measurement error in Cox's proportional hazards model, it is shown how the Breslow likelihood can be corrected, whereas, by results of Stefanski and of Nakamura himself, no corrected score function can exist for the partial likelihood. We then turn to parametric failure time models and extend the approach to lifetimes that are themselves error-prone. Finally, some ideas for handling Berkson-type errors (as occur in radon studies) and rounding errors are sketched.
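The corrected-score idea is easiest to see outside the failure-time setting. Below is a minimal sketch (not taken from the talk) of Nakamura's classic example: Poisson regression with an additive, normally distributed covariate error of known variance. One exploits E[exp(bW) | x] = exp(bx + b²σ²/2) to build a score in the observed surrogate W whose conditional expectation equals the true-covariate score. All parameter values and the bisection solver are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
beta_true = 0.5
sigma2 = 0.25          # measurement-error variance, assumed known

x = rng.normal(size=n)                                  # true covariate
w = x + rng.normal(scale=np.sqrt(sigma2), size=n)       # error-prone surrogate
y = rng.poisson(np.exp(beta_true * x))                  # Poisson response

def naive_score(b):
    # Usual Poisson score with the surrogate w plugged in for x:
    # sum over i of (y_i - exp(b*w_i)) * w_i  -> biased (attenuated) root
    return np.sum((y - np.exp(b * w)) * w)

def corrected_score(b):
    # Nakamura's corrected score: (w - b*sigma2) * exp(b*w - b^2*sigma2/2)
    # has conditional expectation x * exp(b*x), so the corrected score is
    # conditionally unbiased for the true-covariate score.
    return np.sum(y * w - (w - b * sigma2) * np.exp(b * w - b**2 * sigma2 / 2))

def solve(score, lo=-2.0, hi=2.0, tol=1e-10):
    # Simple bisection; both scores change sign from + to - on [lo, hi] here.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if score(mid) > 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

beta_naive = solve(naive_score)          # attenuated towards zero
beta_corr = solve(corrected_score)       # approximately recovers beta_true
```

The same recipe, corrrecting each term of a likelihood-based score so that its conditional expectation given the true data matches the true score, is what the talk applies to the Breslow likelihood; the negative result mentioned above says no such construction exists for Cox's partial likelihood.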
Contact firstname.lastname@example.org for more information.