PSM Alumna Launches FDA & Digital Health Blog


Haewon Park, PhD, received a certification in Pharmaceutical and Clinical Trials Management from the Rutgers Professional Science Master’s program in 2020. She has a background in geoscience research, having received her Doctor of Philosophy from Princeton in 2008, and in regulatory affairs for medical devices. The certification she earned at Rutgers helped her launch a career transition into the life sciences.

Today, her expertise focuses on FDA regulatory strategy, and she is excited to share a new 10-week blog series exploring AI, SaMD, and digital health innovations with the Rutgers Master of Business and Science (MBS) community. She hopes to break down complex and ever-evolving FDA guidance and to offer practical insights for anyone curious about technology-driven medical devices and their regulation.

We are sharing the first installment of her blog series.


The Regulatory Paradox of AI in Healthcare — And How PCCPs Are Changing the Game

Imagine walking into a hospital where AI-powered devices are not only reading real-time patient data but also learning and evolving—refining predictions, improving accuracy, and adapting to changing clinical needs. This is not science fiction. It is the promise of AI in healthcare.

But here is the regulatory paradox:

If a device continuously learns, how do we ensure it remains safe and effective—without freezing innovation?

Historically, the FDA has treated software as a medical device (SaMD) as “locked” at the time of clearance. Any significant update, such as retraining a machine learning model, usually requires a new 510(k), De Novo, or PMA submission.

This creates a bottleneck that slows innovation and introduces risk: fixed AI models can degrade over time, particularly when deployed in new clinical environments or patient populations. In fact, fewer than 2% of FDA-cleared AI devices have been retrained on new data post-launch (Wu et al., 2024).

🔍 Enter the FDA’s Predetermined Change Control Plan (PCCP)

The PCCP framework is the FDA’s answer to this dilemma. It enables manufacturers to include a pre-approved plan for specific future changes—such as retraining algorithms or tuning performance thresholds—at the time of the original submission.

In short: You can get regulatory sign-off before your AI evolves, rather than resubmitting every time it does.

A PCCP consists of two core components:

  • SaMD Pre-Specifications (SPS): what future modifications you anticipate
  • Algorithm Change Protocol (ACP): how you will implement, test, and monitor those changes safely

🧠 What Makes a Strong PCCP?

FDA has outlined a clear structure for building a PCCP that earns trust—and avoids regulatory delays.

Here’s what you need to get right:

1. Data Collection: Representativeness First

Your AI is only as good as the data it’s trained on. The FDA expects manufacturers to proactively ensure that data sources reflect diverse, real-world populations. That means:

  • Documented demographics: age, sex, race, geography, and comorbidities
  • Clear data sources: clinical settings, home use, app-based collection
  • Transparent consent and privacy protections

✅ Tip: Include a Data Acquisition Plan in your ACP that defines your strategy for current and future data, including consent tracking and bias mitigation.
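As a concrete illustration of the representativeness check above, a minimal sketch in Python might tabulate how each documented demographic field is distributed in a dataset. The record structure and field names (`age_group`, `sex`, `site`) are hypothetical, not part of any FDA-specified format:

```python
from collections import Counter

# Hypothetical patient records; field names are illustrative only.
records = [
    {"age_group": "40-59", "sex": "F", "site": "hospital_a"},
    {"age_group": "60+",   "sex": "M", "site": "hospital_a"},
    {"age_group": "18-39", "sex": "F", "site": "home_use"},
    {"age_group": "60+",   "sex": "F", "site": "hospital_b"},
]

def demographic_summary(records, fields=("age_group", "sex", "site")):
    """Tabulate how each demographic field is distributed, so gaps
    (e.g., an age group with zero records) are easy to spot."""
    return {f: Counter(r[f] for r in records) for f in fields}

summary = demographic_summary(records)
for field, counts in summary.items():
    print(field, dict(counts))
```

A summary like this can feed the Data Acquisition Plan: any subgroup that is absent or sparse becomes an explicit data-collection target rather than a post-hoc surprise.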

2. Data Preparation: Keep Pipelines Transparent and Controlled

Preprocessing must be:

  • Consistent (normalization, filtering, transformation)
  • Well-documented and version-controlled
  • Segregated between training, tuning, and testing datasets

✅ Tip: Treat your data preparation like a regulated software system. Lock the logic, track the changes, and validate updates. (GMLP Principle 7: “focus is placed on the performance of the deployed model, and its retraining methods are well understood and controlled.”)
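Two of the ideas above, locking the preprocessing logic and segregating datasets, can be sketched in a few lines of Python. The config fields and record IDs here are hypothetical; the point is that a hashed fingerprint makes any pipeline change traceable, and a deterministic split keeps a record from drifting between training, tuning, and testing:

```python
import hashlib
import json

# Illustrative preprocessing config; "locking" it means any change
# produces a new fingerprint that can be traced in change control.
preprocess_config = {
    "normalization": "z-score",
    "filter": {"type": "bandpass", "low_hz": 0.5, "high_hz": 40.0},
    "version": "1.2.0",
}

def config_fingerprint(config):
    """Stable hash of the preprocessing logic, for audit trails."""
    canonical = json.dumps(config, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

def assign_split(record_id, train=0.7, tune=0.15):
    """Deterministically segregate records into train/tune/test so the
    same record always lands in the same dataset across runs."""
    bucket = int(hashlib.sha256(record_id.encode()).hexdigest(), 16) % 1000 / 1000
    if bucket < train:
        return "train"
    if bucket < train + tune:
        return "tune"
    return "test"

print(config_fingerprint(preprocess_config))
print(assign_split("patient-0001"))
```

Hashing the record ID, rather than randomizing at runtime, is what makes the segregation reproducible and therefore defensible in an audit.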

3. Annotation: Label Accuracy You Can Defend

FDA reviews the quality of your labels and expects qualified annotators, inter-rater metrics, and adjudication workflows. Ground truth labeling is where many submissions fall apart. FDA wants to see:

  • Use of qualified clinical annotators
  • Inter-rater agreement metrics (like Cohen’s kappa)
  • An adjudication workflow for resolving discrepancies

✅ Tip: For subjective tasks (e.g., radiology or dermatology), use triple annotation with adjudication. Document every step.
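For the inter-rater agreement metric mentioned above, Cohen’s kappa for two annotators can be computed from scratch in a few lines. The labels below are invented for illustration; in practice you would run this over your real annotation exports:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Inter-rater agreement for two annotators labeling the same
    cases, corrected for agreement expected by chance."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    expected = sum(counts_a[l] * counts_b[l] for l in labels) / n**2
    return (observed - expected) / (1 - expected)

# Two hypothetical radiologists labeling the same 10 scans
a = ["malignant", "benign", "benign", "malignant", "benign",
     "benign", "malignant", "benign", "benign", "benign"]
b = ["malignant", "benign", "malignant", "malignant", "benign",
     "benign", "malignant", "benign", "benign", "benign"]
print(round(cohens_kappa(a, b), 3))
```

Cases where the raters disagree (like the third scan above) are exactly the ones that should flow into the adjudication workflow for a third, tie-breaking read.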

4. Performance Monitoring: Beyond Accuracy

Do not just show that your model performs well overall. The Final Guidance emphasizes continued monitoring of outcomes across diverse populations. Thus, manufacturers are encouraged to:

  • Analyze performance across subgroups
  • Define retraining triggers and thresholds
  • Address model drift through real-world monitoring

✅ Tip: Create a dashboard or routine audit process that flags demographic performance drops—and connects back to your PCCP change triggers. FDA’s AI/ML Action Plan emphasizes the importance of post-market performance tracking and updates.
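The routine audit suggested in the tip can start as something very simple: compare each subgroup’s performance against a predefined trigger from the PCCP and flag any that fall below it. The subgroup names, counts, and 80% threshold here are all illustrative assumptions:

```python
def subgroup_flags(results, threshold=0.80):
    """Flag subgroups whose accuracy falls below a predefined PCCP
    retraining trigger. `results` maps subgroup -> (correct, total)."""
    flags = []
    for group, (correct, total) in results.items():
        accuracy = correct / total
        if accuracy < threshold:
            flags.append((group, round(accuracy, 3)))
    return flags

# Illustrative post-market monitoring counts per demographic subgroup
results = {
    "age_18_39": (188, 200),
    "age_40_59": (170, 200),
    "age_60_plus": (149, 200),  # 74.5%: below the 80% trigger
}
for group, acc in subgroup_flags(results):
    print(f"retraining trigger fired: {group} accuracy={acc}")
```

Because the threshold is written into the PCCP in advance, a flag like this maps directly to a pre-authorized change rather than an ad hoc decision.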

5. Risk Management & Governance: Integrate with QMS

A PCCP does not live in isolation; it must integrate with your Quality Management System (QMS). FDA will expect:

  • Documentation aligned with the Total Product Life Cycle (TPLC) framework
  • SOPs for AI model updates and change control
  • Traceability across datasets, code, and outputs

✅ Tip: Keep documentation audit-ready. Design your PCCP as an extension of your overall QMS.

Why It Matters

AI models are notoriously brittle when deployed in new settings. A model trained in one hospital may underperform in another due to different patient demographics, devices, or protocols.

PCCPs provide a regulatory path to enable:

  • Site-specific retraining
  • Personalized tuning
  • Adaptive performance updates

Without having to file a new submission every time.

Practical Advice for AI-SaMD Teams

  1. Embed PCCP early: Do not retrofit it. Start designing your product with future change control in mind.
  2. Invest in high-quality, diverse data: Representativeness is not just ethical—it is regulatory.
  3. Avoid vague SPS: Be specific about what you will change. Overly broad statements raise red flags.
  4. Engage FDA early through Q-Submission: Share your PCCP scope and get alignment on your validation plans.
  5. Collaborate across teams: Regulatory Affairs, Data Science, Clinical, QA, and Product must all weigh in on your PCCP strategy.

Final Thought

PCCPs are not just a regulatory workaround; they are a strategic enabler for scaling safe, adaptive, AI-driven healthcare.

If you are developing a wellness app, wearable, or clinical AI tool that will evolve post-launch, your PCCP readiness is as critical as cybersecurity or risk management. If done right, it builds trust with regulators, clinicians, and patients—while giving your team the freedom to keep improving your product.

Are you working on a PCCP strategy or SaMD launch?
I’d love to hear your thoughts, share ideas, or review your approach.

Let’s connect.

------------------------

View this as a LinkedIn post on her profile. You can also read the second installment in her series, “Why AI SaMD Needs More Than Engineers: The Case for Multi-Disciplinary Expertise,” on LinkedIn.

Published on: 08/28/2025