Fit-for-Purpose or Bust: A Real-World Rubric for DHT Verification, Validation & Usability

Introduction

As digital health technologies (DHTs) reshape clinical research—from wearables that monitor mobility to smartphone apps capturing patient-reported data—the industry faces a critical question: how do we know these tools are truly fit-for-purpose?

Regulators, sponsors, and patients alike depend on assurance that DHTs produce accurate, reliable, and meaningful data. Yet, the terms verification, validation, and usability are often used interchangeably, blurring the lines between technical performance and clinical applicability.

The FDA, EMA, and MHRA have all emphasized that fit-for-purpose qualification is not a paperwork exercise—it is a risk-based, evidence-driven process that ensures digital tools are credible for their specific context of use [1–3]. This article provides a real-world rubric for evaluating DHTs across these three critical dimensions, using regulatory guidance and real case examples to bring the framework to life.

What “Fit-for-Purpose” Really Means

Consumer wearables are often marketed under “general wellness” exemptions. For example, a Fitbit that tracks steps for fitness purposes is not regulated as a medical device by FDA or EU MDR. However, when the same device is used in a clinical trial to generate endpoint data that will inform regulatory decisions, its role changes [1].

A DHT is fit-for-purpose when it is scientifically, technically, and operationally appropriate for its intended use in a specific clinical trial [1]. This goes beyond basic functionality. A wearable that accurately counts steps in healthy volunteers may fail in Parkinson’s disease patients with gait variability, rendering it unfit for endpoint generation.

The context of use—disease area, population, study design, and endpoint definition—dictates what level of verification, validation, and usability testing is required [4].

1. Verification: Proving It Works Technically

Verification answers the question: Does the device or software perform as designed?

This stage focuses on internal engineering and quality checks. Sponsors should verify device specifications, algorithmic accuracy, data precision, and consistency across environments [5]. Verification also ensures that firmware and software updates do not compromise performance—a frequent challenge in decentralized trials using commercially available wearables [6].

Key activities:

  • Bench testing and algorithm verification.

  • Version control documentation.

  • Data transmission and integrity verification.
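
Of these, the data-integrity check is the easiest to automate. A minimal sketch, assuming a hypothetical convention in which each transmitted sensor payload carries a device-computed SHA-256 checksum (real devices and transfer protocols vary):

```python
import hashlib

def verify_payload(payload: bytes, expected_sha256: str) -> bool:
    """Confirm a transmitted sensor payload arrived uncorrupted."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Example: a step-count payload and the checksum computed at the device
payload = b'{"steps": 4231, "epoch": "2023-05-01T00:00:00Z"}'
checksum = hashlib.sha256(payload).hexdigest()

assert verify_payload(payload, checksum)             # intact transfer
assert not verify_payload(payload + b" ", checksum)  # corrupted transfer
```

The same pattern extends naturally to audit logs: storing the checksum alongside each record gives a verifiable chain from device to database.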

Example:

In the Apple Heart Study, verification of the irregular-rhythm detection algorithm was essential: notifications from the smartwatch's photoplethysmography-based algorithm were compared against ambulatory electrocardiogram (ECG) patch monitoring [7].

2. Validation: Confirming Clinical Relevance

Validation assesses whether the verified technology produces clinically meaningful data that align with the intended use and endpoint definition [8].

Regulators often divide validation into three dimensions:

  • Analytical validation: Does the DHT accurately measure the parameter of interest (e.g., heart rate, stride velocity)?

  • Clinical validation: Does the parameter meaningfully correlate with a clinical outcome?

  • Operational validation: Can the DHT perform reliably under trial conditions and patient diversity?

Example:

The EMA’s qualification of stride velocity 95th centile as an endpoint in Duchenne muscular dystrophy trials hinged on robust analytical and clinical validation of the wearable technology used [9].

Validation also requires fit-for-purpose statistical modeling, ensuring that variability, sensitivity, and the handling of missing data are predefined and justified in the statistical analysis plan [10].
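
Such predefined rules are most auditable when expressed as code. A sketch, assuming a hypothetical SAP rule that a study day is evaluable only with at least 80% device wear time (the threshold and field names are illustrative, not drawn from any specific guidance):

```python
# Hypothetical SAP rule: a study day counts only if wear time >= 80%
MIN_WEAR_FRACTION = 0.80
MINUTES_PER_DAY = 24 * 60

def evaluable_days(wear_minutes_by_day: dict[str, int]) -> list[str]:
    """Return study days meeting the predefined wear-time threshold."""
    return [
        day for day, minutes in wear_minutes_by_day.items()
        if minutes / MINUTES_PER_DAY >= MIN_WEAR_FRACTION
    ]

wear_log = {"2023-05-01": 1380, "2023-05-02": 600, "2023-05-03": 1200}
print(evaluable_days(wear_log))  # days 1 and 3 pass the 80% threshold
```

Writing the rule down once, before unblinding, makes it trivially reproducible and removes any ambiguity about which days feed the endpoint.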

3. Usability: Ensuring It Works for Humans

Even the most accurate device fails if patients cannot use it correctly. Usability testing ensures that target populations can operate the DHT as intended—an essential step under both FDA and EU MDR frameworks [2,11].

Usability testing involves observing users as they interact with the technology in realistic conditions, identifying potential sources of misuse, misunderstanding, or non-compliance. In DHT-enabled trials, usability errors often cause data loss or bias.

Example:

In decentralized COVID-19 monitoring studies, sponsors discovered that participants frequently misused consumer pulse oximeters, leading to inaccurate readings. Training modules and simplified interfaces mitigated this risk [12].

Best practices for usability:

  • Include diverse patient groups in testing.

  • Provide intuitive interfaces and clear instructions.

  • Assess cognitive and physical burden on participants.

Building a Real-World Rubric for Fit-for-Purpose Qualification

Dimension | Primary Question | Regulatory Framework | Evidence Needed | Example
Verification | Does the DHT perform as designed? | ISO 13485 / IEC 62304 | Bench tests, logs, version history | Firmware accuracy testing
Validation | Is the data clinically meaningful? | FDA / EMA DHT guidance | Analytical + clinical validation | Stride velocity endpoint
Usability | Can patients use it correctly? | IEC 62366-1 | Usability reports, training logs | Remote oximetry training

Together, these three pillars ensure that DHTs are scientifically credible, operationally feasible, and ethically sound.

Global Regulatory Alignment

  • FDA (2023): Digital Health Technologies for Remote Data Acquisition in Clinical Investigations emphasizes that sponsors must demonstrate DHT verification, validation, and usability testing before inclusion in clinical trials [1].

  • EMA: Requires evidence that DHT-derived endpoints are reliable, interpretable, and reproducible [9].

  • MHRA: Calls for “proportionate validation” and supports fit-for-purpose qualification through its Innovative Devices Access Pathway [3].

Global convergence is underway, but sponsors must tailor their documentation to regional expectations—balancing flexibility with compliance.

Common Pitfalls and Lessons Learned

  1. Assuming consumer devices are fit-for-purpose. Regulatory acceptance depends on validation, not popularity.

  2. Neglecting software version control. Firmware updates can change measurement algorithms, invalidating prior data.

  3. Underestimating usability challenges. Even minor interface issues can cause systematic data bias.

  4. Ignoring context of use. Validation in one disease area rarely translates directly to another.
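
Pitfall 2 in particular can be reduced by stamping every reading with the firmware and algorithm version in effect at capture, so an unannounced update shows up in the data rather than silently shifting it. A minimal sketch (field names are illustrative):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Reading:
    value: float
    firmware: str   # device firmware version at capture time
    algorithm: str  # measurement algorithm version at capture time

def version_changes(readings: list[Reading]) -> list[int]:
    """Indices where firmware or algorithm version differs from the prior reading."""
    return [
        i for i in range(1, len(readings))
        if (readings[i].firmware, readings[i].algorithm)
        != (readings[i - 1].firmware, readings[i - 1].algorithm)
    ]

stream = [
    Reading(4231, "2.1.0", "steps-v5"),
    Reading(4310, "2.1.0", "steps-v5"),
    Reading(5120, "2.2.0", "steps-v6"),  # silent update becomes visible
]
print(version_changes(stream))  # [2]
```

Flagged indices can then trigger a sensitivity analysis, or a bridging study, before pre- and post-update data are pooled.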

Conclusion

Verification, validation, and usability are not regulatory hurdles—they are the foundation of trust in digital evidence. As DHTs become indispensable to modern clinical research, “fit-for-purpose” must evolve from buzzword to operational principle.

Sponsors that implement a structured, risk-based rubric for DHT qualification will not only ensure regulatory readiness but also gain a competitive edge. In a field where technology meets patient experience, fit-for-purpose isn’t optional—it’s survival.

References

  1. FDA. Digital health technologies for remote data acquisition in clinical investigations. Guidance. 2023.

  2. European Commission. Regulation (EU) 2017/745 on medical devices (MDR). Brussels: EC; 2017.

  3. MHRA. Software and AI as a medical device: Change programme roadmap. London: MHRA; 2022.

  4. EFPIA. Reflection paper on integrating medical devices into medicinal product clinical trials. Brussels: EFPIA; 2025.

  5. ISO 13485:2016. Medical devices – Quality management systems. Geneva: ISO; 2016.

  6. Bent B, Goldstein BA, Jenkins NW, et al. Investigating sources of inaccuracy in wearable sensors for clinical trials. NPJ Digit Med. 2020;3:27.

  7. Perez MV, Mahaffey KW, Hedlin H, et al. Large-scale assessment of a smartwatch to identify atrial fibrillation. N Engl J Med. 2019;381:1909–1917.

  8. Kellar E, Bornstein S, Celi LA, et al. Optimizing the use of electronic data sources in clinical trials. Ther Innov Regul Sci. 2017;51(4):408–416.

  9. EMA. Qualification opinion: stride velocity 95th centile as secondary endpoint in Duchenne muscular dystrophy. EMA; 2019.

  10. FDA. Statistical considerations for clinical studies using digital health technology data. Draft guidance. 2023.

  11. IEC 62366-1:2015. Application of usability engineering to medical devices. Geneva: IEC; 2015.

  12. Sehrawat O, Noseworthy PA, Siontis KC, et al. Data-driven and technology-enabled trial innovations toward decentralization. Mayo Clin Proc. 2023;98(9):1404–1421.
