The aim of this project was to empirically evaluate the impact of on-site source data verification (SDV) on data quality in a clinical trial database, to guide an informed choice of monitoring approach.

We used data from three randomized phase III trials monitored with a combination of complete and partial SDV. After database lock, individual subject data were extracted from the clinical database and subjected to post hoc complete SDV. Error rates were calculated with a focus on the degree of on-study monitoring and on relevance, and were analyzed for potential impact on endpoints.

Data from a total of 2566 subjects, comprising more than 3 million data fields, were 100% source data verified post hoc. An overall error rate of 0.45% was found, and no site had a 0% error rate. Complete (100%) SDV yielded an error rate of 0.27%, compared with 0.53% for partial SDV (P < 0.0001). Comparing partially and fully monitored subjects, only minor differences were identified in variables of major importance to efficacy or safety.
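The reported significance of the difference between the two error rates can be sanity-checked with a two-proportion z-test. The sketch below is illustrative only: the abstract does not give the per-arm field counts, so an even split of 1.5 million fields per monitoring arm is a pure assumption, and the error counts are back-calculated from the rounded rates.

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical per-arm counts (assumed even split of ~3 million fields);
# error counts back-calculated from the rounded rates in the abstract.
fields_full, fields_partial = 1_500_000, 1_500_000
errors_full = round(0.0027 * fields_full)       # 0.27% under complete SDV
errors_partial = round(0.0053 * fields_partial) # 0.53% under partial SDV

z = two_proportion_z(errors_partial, fields_partial, errors_full, fields_full)
# With samples this large, |z| far exceeds 3.89 (two-sided P < 0.0001),
# consistent with the reported significance level.
```

With counts of this magnitude the test is overwhelmingly significant regardless of the exact per-arm split, which is why the assumed split does not affect the qualitative conclusion.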

The findings challenge the notion that a 0% error rate is obtainable with on-site monitoring. The data indicate consistently low error rates across the three trials analyzed. Complete versus partial SDV offers a marginal absolute error rate reduction of 0.26%, i.e. complete SDV of about 370 data points is needed to avoid one unspecified error, and this does not support complete SDV as a means of achieving meaningful improvements in data accuracy.
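The "data points verified per error avoided" figure follows the same logic as a number-needed-to-treat calculation: the reciprocal of the absolute error rate reduction. A minimal sketch using the rounded rates from the abstract (the paper's "about 370" presumably derives from unrounded underlying rates, so the rounded-rate result differs slightly):

```python
# Number of fields needing complete SDV to avoid one error, analogous to
# a number-needed-to-treat (NNT) calculation.
rate_partial = 0.0053  # 0.53% error rate under partial SDV
rate_full = 0.0027     # 0.27% error rate under complete SDV

absolute_reduction = rate_partial - rate_full       # 0.26 percentage points
fields_per_error_avoided = 1 / absolute_reduction   # ~385 with rounded rates
```

Using the rounded rates gives roughly 385 fields per error avoided; either way, the order of magnitude (several hundred fully verified data points per error prevented) supports the conclusion that complete SDV yields only marginal gains.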