Mutual recognition methodology development
Authors: Carol A. C. Flannagan, Paul E. Green, Kathleen D. Klinich, Miriam A. Manary, András Bálint, Ulrich Sander, Bo Sui, Peter Sandqvist, Selpi, Christian Howard
Phase 1 of the Mutual Recognition Methodology Development (MRMD) project developed an approach to statistical modeling and analysis of field data to address the state of evidence relevant to mutual recognition of automotive safety regulations. Specifically, the report describes a methodology that can be used to measure evidence for the hypothesis that vehicles meeting EU safety standards would perform similarly to US-regulated vehicles in the US driving environment, and that vehicles meeting US safety standards would perform similarly to EU-regulated vehicles in the EU driving environment. As part of the project, we assessed the availability and contents of crash datasets from the US and the EU, as well as their collective ability to support the proposed statistical methodology.

The report describes a set of three statistical approaches to "triangulate" evidence regarding similarity or differences in crash and injury risk associated with EU- and US-regulated vehicles. Approach 1, Seemingly Unrelated Regression, tests whether the models are identical and also assesses the capability of the data analysis to detect differences between the models, if differences exist. Approach 2, Consequences of Best Models, uses logistic regression to develop two separate models, one for EU risk and one for US risk, as a function of a set of predictors (i.e., crash, vehicle, and occupant conditions). The two models are then exercised on a standard population for the EU and a standard population for the US. Approach 3, Evidence for Consequences, turns the question around to measure the overall evidence for each of a set of possible conclusions. Each conclusion is characterized by a range of relative risk on a single population. Evidence is measured using a weighted average of likelihoods for a large group of models that produce the same outcome; that evidence is then compared using Bayes Factors.
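To make Approach 2 concrete, the following is a minimal sketch of the "Consequences of Best Models" idea: fit two separate logistic regression models of injury risk (one per regulatory regime) and then exercise both on a single standard population, so that any difference in predicted risk reflects the models rather than differences in exposure. The data here are entirely synthetic, and the predictors (delta-V and occupant age) and coefficients are hypothetical placeholders, not values from the report.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_data(n, coefs, intercept):
    """Generate a synthetic crash sample with a known injury-risk curve.

    Predictors (hypothetical): delta-V in km/h, occupant age in years.
    Outcome: binary injury indicator drawn from a logistic model.
    """
    X = np.column_stack([rng.uniform(10, 60, n),    # delta-V (km/h)
                         rng.uniform(16, 90, n)])   # occupant age (years)
    p = 1.0 / (1.0 + np.exp(-(intercept + X @ coefs)))
    y = (rng.random(n) < p).astype(int)
    return X, y

# Hypothetical EU- and US-regulated fleets with slightly different risk curves.
X_eu, y_eu = make_data(5000, np.array([0.08, 0.02]), -5.0)
X_us, y_us = make_data(5000, np.array([0.09, 0.02]), -5.2)

# One "best" logistic model per regime.
model_eu = LogisticRegression().fit(X_eu, y_eu)
model_us = LogisticRegression().fit(X_us, y_us)

# Exercise BOTH models on one standard population (here, the EU sample),
# so the comparison isolates model differences from population differences.
std_pop = X_eu
risk_eu = model_eu.predict_proba(std_pop)[:, 1].mean()
risk_us = model_us.predict_proba(std_pop)[:, 1].mean()
print(f"Mean injury risk on standard population: EU {risk_eu:.3f}, US {risk_us:.3f}")
print(f"Relative risk (US/EU): {risk_us / risk_eu:.2f}")
```

In the full methodology this comparison would be repeated on a standard EU population and a standard US population, and Approach 3 would then weigh evidence for ranges of the resulting relative risk via Bayes Factors.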