
Trust, Transparency, and the Essential Work of Scaling AI in Precision Oncology
The field of artificial intelligence (AI) is rapidly making inroads into cancer care, promising quicker insights, more precise treatments, and fewer missed opportunities. However, for AI to genuinely broaden access to precision oncology—not just in elite academic institutions but also in community clinics and rural hospitals—it must be developed and implemented on a solid foundation of trust and transparency.
Why This Matters Now
Precision oncology is revolutionizing cancer treatment by aligning patients with appropriate tests, therapies, and clinical trials based on the biology of their tumors and individual circumstances. Yet, access remains inconsistent. Many patients who could greatly benefit from biomarker testing or targeted therapies do not receive them, particularly in under-resourced areas. AI can help bridge these gaps by extracting evidence from extensive data, standardizing complex processes, and facilitating timely decisions. But none of this is achievable without trust—from healthcare providers, patients, healthcare systems, and regulators.
Trust isn’t just a catchy slogan. It is the result of transparent design, thorough validation, responsible governance, and reliable performance in real-world settings. The objective is straightforward: utilize AI to safely and equitably extend access to precision oncology while garnering and maintaining the trust of those who depend on it.
The Access Gap in Precision Oncology
Precision oncology depends on several crucial steps that can easily be overlooked: ordering the appropriate molecular tests, interpreting complex reports, selecting evidence-based therapies, and matching patients with trials. Research consistently reveals disparities across these critical steps:
- Rates of biomarker testing are lower in certain community settings, particularly among patients from historically underserved groups, restricting access to targeted therapies and clinical trials. The American Association for Cancer Research has documented persistent disparities in genomic testing and trial representation among racial and ethnic minorities. AACR Cancer Disparities Progress Report 2024
- While coverage and reimbursement for next-generation sequencing (NGS) have improved, they continue to be complicated. Medicare’s national coverage determination for NGS has opened important avenues; however, fluctuations across payers and indications can still hinder access. CMS NCD 90.2
- The interpretation of molecular results can vary significantly, especially in settings lacking specialized molecular tumor boards. Implementing structured decision support can help standardize care and minimize undesired variability. ESMO Open – Molecular tumor boards
AI can assist clinicians at each of these critical junctures, but not without proper safeguards. Poorly designed or opaque tools can exacerbate inequities. In contrast, robust and transparent AI can illuminate these gaps and help remediate them.
What Transparency Actually Means in Healthcare AI
Transparency extends beyond simply informing users that a tool utilizes AI. It means delivering the right information to the right people at the right moment, so that clinical teams can use the tool effectively and patients can give informed consent. Practical transparency should encompass the following elements (a minimal model-card sketch follows this list):
- Purpose and Intended Use – Clearly define what the model is designed to accomplish and what it is not.
- Data Provenance – Explain the source of the training data, how it was collected, and how representative it is of the patient population being served.
- Performance Measures – Validate metrics across multiple sites and diverse populations, including subgroup performance based on race, ethnicity, age, sex, and other relevant factors.
- Limitations and Failure Modes – Identify known situations where performance may decline and communicate how users will be alerted.
- Update Plans – Clarify what will change over time and how these alterations will be tested and communicated.
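To make these elements concrete, here is a minimal sketch of how a team might encode a model card as structured data. Everything in it (the field names, the model name, and all values) is hypothetical, chosen for illustration rather than drawn from any mandated schema.

```python
from dataclasses import dataclass

@dataclass
class ModelCard:
    """Illustrative model card capturing the transparency elements above.
    Fields and values are hypothetical, not a standardized schema."""
    name: str
    intended_use: str                        # what the model is designed to do
    out_of_scope: list[str]                  # explicitly unsupported uses
    data_provenance: str                     # where the training data came from
    subgroup_performance: dict[str, float]   # e.g., AUROC by subgroup
    known_limitations: list[str]             # known failure modes
    update_plan: str                         # how changes are tested and communicated

card = ModelCard(
    name="nsclc-biomarker-triage-v1",
    intended_use="Flag advanced NSCLC patients who may be due for guideline-based biomarker testing.",
    out_of_scope=["treatment selection", "pediatric populations"],
    data_provenance="De-identified EHR data from 12 community and academic sites, 2018-2023.",
    subgroup_performance={"overall": 0.88, "black": 0.85, "hispanic": 0.86, "rural": 0.83},
    known_limitations=["Performance degrades when pathology reports are scanned images rather than text."],
    update_plan="Quarterly re-validation; material changes announced to users 30 days in advance.",
)
```

Publishing a card like this alongside the tool, and keeping it current, gives clinicians and governance committees a single place to check intended use, provenance, and subgroup performance.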
Several frameworks inform this effort. The National Institute of Standards and Technology’s AI Risk Management Framework provides a standardized language for trustworthy AI, emphasizing transparency, accountability, and fairness (NIST AI RMF 1.0). The World Health Organization similarly advocates for clear regulatory expectations for AI in health, including documentation, post-deployment monitoring, and human oversight (WHO – Regulatory considerations for AI in health).
Regulatory Guardrails Are Taking Shape
As AI becomes integrated into clinical software and devices, regulatory frameworks are evolving to keep pace with the technology:
- FDA – In 2024, the FDA released final guidance on Predetermined Change Control Plans for machine learning-enabled medical device software, clarifying how developers can implement model updates post-clearance while maintaining safety and efficacy (FDA PCCP Guidance). The agency also endorses Good Machine Learning Practice for healthcare AI (FDA GMLP).
- ONC – The Office of the National Coordinator for Health IT mandates transparency for certain algorithms embedded in certified health IT. The HTI-1 final rule specifies the information required to be shared with users about decision support interventions. ONC HTI-1
- Global Context – The European Union’s AI Act introduces graduated risk requirements for AI systems, necessitating transparency and risk management for high-risk applications, including many healthcare uses. EU AI Act
These regulatory frameworks do not replace the need for local governance. Healthcare systems must still establish practical oversight, maintain model inventories, and create clear processes for evaluation, deployment, and monitoring.
From Pilot to Practice – Building Clinician and Patient Trust
Clinicians lack the time for opaque black boxes. Trust is built when AI tools are tailored for real workflows and establish credibility through consistent performance.
- Make the Invisible Visible – Provide evidence behind each recommendation, including links to guidelines and citations. Where uncertainty is high, present confidence estimates or data quality flags.
- Start with Narrow, Well-Defined Use Cases – For instance, begin by triaging which patients with advanced lung cancer require reflex testing for actionable biomarkers based on guideline criteria. Limit the scope, measure outcomes, and expand responsibly.
- Evaluate Prospectively – Silent or side-by-side deployments clarify impact before the tool influences care, followed by a transition to monitored use with appropriate guardrails (see the shadow-logging sketch below).
- Invest in Training and Feedback – Rapid-feedback loops in tumor boards or clinic huddles help teams adjust trust levels and identify issues swiftly.
- Design for Community Settings – Tools that function only in pristine data environments will falter where access gaps are most pronounced. Develop solutions that accommodate messy data, variable connectivity, and diverse electronic health records (EHRs).
Trust is built in the workflow—by being accurate when it matters, transparent about limitations, and helpful without being intrusive.
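As one illustration of the silent deployment mentioned above, the sketch below logs model outputs without surfacing them, so they can later be compared with what clinicians actually did. The `model.predict` interface and the patient record fields are assumptions made for illustration, not a real system’s API.

```python
import json
from datetime import datetime, timezone

def shadow_log(patient: dict, model, log_path: str = "shadow_log.jsonl"):
    """Run the model silently and record its output for later comparison
    with clinicians' actual decisions; nothing is shown in the chart.
    `model.predict` and the patient dict are hypothetical interfaces."""
    prediction = model.predict(patient)
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "patient_id": patient["id"],     # assumes a de-identified study ID
        "model_output": prediction,
        "shown_to_clinician": False,     # silent phase: never surfaced
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return prediction
```

After a pre-specified period, the log can be joined against actual orders to estimate agreement and error rates before the tool is allowed to influence care.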
Data Quality and Bias – Critical Issues at Hand
Precision oncology data are intricate. Electronic health records, pathology reports, imaging, genomic profiles, social determinants of health, and claims each contribute unique pieces, and each source carries its own noise, gaps, and bias. To develop equitable AI, teams should take the following steps (a subgroup-evaluation sketch follows the list):
- Document Representativeness – Disclose the demographic and clinical characteristics of training and test sets. Emphasize underrepresented groups and the implications for performance.
- Test for Subgroup Performance – Report performance by race, ethnicity, sex, age, socioeconomic status, language, and geography where feasible. Make subgroup results visible in model cards and user dashboards.
- Monitor for Drift – Cancer treatments, testing practices, and patient populations evolve over time. Regular monitoring is essential to detect data drift and recalibrate models safely.
- Utilize Federated and Privacy-Preserving Methods – These approaches can diversify training data without centralizing sensitive information but necessitate careful governance.
- Plan for Missing Data and Label Noise – Develop robust pipelines that effectively handle unstructured text, changing nomenclatures, and incomplete documentation.
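Here is a minimal sketch of the subgroup evaluation described above, assuming a validation DataFrame with one row per patient, a binary outcome label, a model score, and a demographic column. The column names and the minimum-group-size threshold are illustrative assumptions.

```python
import pandas as pd
from sklearn.metrics import roc_auc_score

def subgroup_report(df: pd.DataFrame, group_col: str,
                    label_col: str = "label", score_col: str = "score",
                    min_n: int = 50) -> pd.DataFrame:
    """AUROC and positive rate per subgroup; suppress estimates for
    groups too small (or too homogeneous) for the numbers to be stable."""
    rows = []
    for group, g in df.groupby(group_col):
        if len(g) < min_n or g[label_col].nunique() < 2:
            rows.append({"group": group, "n": len(g), "auroc": None})
            continue
        rows.append({
            "group": group,
            "n": len(g),
            "auroc": roc_auc_score(g[label_col], g[score_col]),
            "positive_rate": g[label_col].mean(),
        })
    return pd.DataFrame(rows)

# e.g., subgroup_report(validation_df, group_col="race_ethnicity")
```

Surfacing a table like this in model cards and dashboards turns subgroup performance from a one-time validation exercise into something users can inspect continuously.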
There is now compelling evidence that algorithms can encode and exacerbate health inequities when bias is not actively addressed. One influential study found that a widely used risk algorithm underestimated the needs of Black patients because it used healthcare costs as a proxy for illness severity. The lesson: choose labels and objectives with care. Obermeyer et al., Science
Broader and more inclusive data collaborations can facilitate this effort. The NIH All of Us Research Program is cultivating a diverse research cohort aimed at supporting more equitable discovery (NIH All of Us). Initiatives like this, combined with transparent reporting and ongoing monitoring, are critical to prevent AI from sidelining patients.
Explainability, Uncertainty, and Human Factors
Not every functional model is easily explainable, and not all explanations are pertinent. In high-stakes care, clinicians require clear and actionable insights to make informed decisions:
- Clarify the Rationale – Provide logic aligned with guidelines and key evidence. Post-hoc explanations should supplement, not replace, clinically grounded reasoning.
- Quantify Uncertainty – Clearly present confidence levels or prediction intervals where applicable, and indicate when the model is operating outside its training distribution (see the conformal-prediction sketch at the end of this section).
- Simplify Processes – Present summaries, alerts, and next-best actions in the environments where clinicians already work. Minimize clicks, jargon, and cognitive strain.
- Design for Collaborative Teams – Oncology decisions are inherently multidisciplinary. Ensure outputs are easily shareable across tumor boards, pharmacy, radiology, and navigation teams.
Clinical AI reporting guidelines such as SPIRIT-AI and CONSORT-AI can assist teams in planning and evaluating trials, defining specific requirements for describing model development, assessment, and integration (SPIRIT-AI, CONSORT-AI).
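One concrete way to quantify uncertainty, referenced in the list above, is split conformal prediction: calibrate a nonconformity threshold on held-out data, then report a prediction set rather than a bare label. The sketch below is a minimal version for a binary classifier and assumes calibration-set probabilities are available; it is illustrative, not a prescription.

```python
import numpy as np

def conformal_threshold(cal_probs, cal_labels, alpha: float = 0.1) -> float:
    """Split conformal calibration for a binary classifier.
    cal_probs: predicted probability of class 1 on a held-out calibration set.
    Nonconformity = 1 - probability assigned to the true class."""
    cal_probs = np.asarray(cal_probs)
    cal_labels = np.asarray(cal_labels)
    p_true = np.where(cal_labels == 1, cal_probs, 1 - cal_probs)
    scores = 1 - p_true
    n = len(scores)
    level = np.ceil((n + 1) * (1 - alpha)) / n   # finite-sample correction
    return float(np.quantile(scores, level, method="higher"))

def prediction_set(p1: float, qhat: float) -> list[int]:
    """Labels whose nonconformity falls within the calibrated threshold.
    A set containing both 0 and 1 is an explicit 'uncertain' signal."""
    return [label for label, p in ((1, p1), (0, 1 - p1)) if 1 - p <= qhat]
```

An output set containing both labels is an honest, machine-checkable flag that the model is uncertain for this patient, which is exactly the kind of signal the list above calls for.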
Workflows That Expand Access, Not Just Insight
AI will only impact equity metrics if it is integrated into care delivery. Examples that can expand access include:
- Biomarker Testing Eligibility Prompts – Automated checks against guideline criteria to minimize missed testing opportunities at diagnosis or disease progression.
- Report Simplification – Transforming complex genomic reports into clear, standardized summaries with plain-language explanations for patients and actionable steps for clinicians.
- Virtual Molecular Tumor Boards – AI-supported case packages that compile data, draft rationales, and highlight relevant trials, enabling community sites to consistently access expertise. ESMO Open
- Trial Matching and Navigation – Tools that parse eligibility criteria and EHR data to prioritize feasible, nearby trials, coupled with navigator workflows to ensure follow-through.
- Population Health Surveillance – Identifying cohorts at risk of care gaps (e.g., patients with stage IV NSCLC who lack documented biomarker testing) and facilitating outreach and care coordination (see the sketch after this list).
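As a sketch of the surveillance example in the last bullet, the query below flags stage IV NSCLC patients with no documented testing for a guideline biomarker panel. The table layout, column names, and gene list are assumptions; a real implementation would work against the local EHR or registry schema.

```python
import pandas as pd

def find_testing_gaps(patients: pd.DataFrame, tests: pd.DataFrame,
                      panel_genes=("EGFR", "ALK", "ROS1", "KRAS")) -> pd.DataFrame:
    """Return stage IV NSCLC patients with no documented testing for any
    gene in the guideline panel. All column names are illustrative."""
    cohort = patients[(patients["diagnosis"] == "NSCLC") &
                      (patients["stage"] == "IV")]
    tested_ids = set(tests.loc[tests["gene"].isin(panel_genes), "patient_id"])
    gaps = cohort[~cohort["patient_id"].isin(tested_ids)]
    return gaps[["patient_id", "site", "diagnosis_date"]]
```

The output list would feed a navigator worklist rather than an automated order, keeping a human in the loop for outreach.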
These are not futuristic concepts; they are practical steps that, when developed with robust governance, can enable more patients to receive the appropriate tests and therapies promptly.
Privacy, Security, and Consent
Precision oncology data are highly sensitive. Building trust requires stringent privacy protections and transparent communication about data usage.
- Guard Against Data Leakage – Tracking technologies, model inversion, and re-identification all create leakage risks that demand robust defenses. The HHS Office for Civil Rights has warned against inappropriate sharing via online tracking tools. HHS OCR – Online Tracking
- Embrace Privacy-Preserving Techniques – De-identification, differential privacy, and federated learning can mitigate risk, though they do not eliminate it. Be explicit about these trade-offs.
- Informed Consent – Use plain-language explanations and patient-friendly model cards to clarify how individual data contribute to care and research.
- Security by Design – Incorporate regular threat modeling, penetration testing, and incident response planning into any AI-enabled clinical system.
Measuring Impact – Outcomes, Equity, and ROI
Access is not an abstract concept; it can be quantified and improved. For AI in precision oncology, it is essential to define and monitor specific metrics (a stratified-metrics sketch follows this list):
- Process Measures – Biomarker testing rates, turnaround times, time to treatment initiation, and prior authorization cycle durations.
- Equity Measures – Testing and therapy access stratified by race, ethnicity, language, geography, insurance types, and care settings.
- Clinical Outcomes – Progression-free and overall survival rates, treatment adherence, and patient-reported outcomes when applicable.
- Operational and Financial Outcomes – Savings from reduced duplication of tests, fewer denials, time saved for clinicians, and minimized unwarranted variance.
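Here is a sketch of how a few of these measures might be computed from a per-patient extract, stratified to give the equity view. The column names and the stratification variable are assumptions for illustration.

```python
import pandas as pd

def access_metrics(df: pd.DataFrame, strata: str = "insurance_type") -> pd.DataFrame:
    """Process and equity measures stratified by a population attribute.
    Assumes one row per patient with a boolean `tested` column and, where
    tested, `order_date` / `result_date` timestamps."""
    df = df.copy()
    df["turnaround_days"] = (df["result_date"] - df["order_date"]).dt.days
    return df.groupby(strata).agg(
        n_patients=("patient_id", "count"),
        testing_rate=("tested", "mean"),
        median_turnaround_days=("turnaround_days", "median"),
    )
```

The same aggregation can be re-run by race, language, or geography to surface gaps that an overall average would hide.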
When incentives align, access improves. For instance, Medicare’s coverage of certain NGS tests streamlined pathways for advanced cancer patients to receive testing, which also encouraged enhanced infrastructure investment (CMS NCD 90.2). AI can help realize the value of that coverage by ensuring eligible patients are promptly identified and tested.
A Practical Trust Checklist for Health Systems
Utilize this quick-start guide to navigate AI deployments aimed at broadening access to precision oncology:
For Clinical Leaders
- Select a narrow, impactful use case directly linked to a measurable care gap.
- Insist on transparent documentation – including model cards, data provenance, subgroup performance, intended use, and update plans.
- Establish pre-launch metrics and monitoring strategies, including equity measures.
- Implement a time-limited pilot with defined success criteria and a rollback plan.
For Data and Product Teams
- Create robust pipelines capable of managing unstructured clinical text and variable coding.
- Equip the product for continuous evaluation, drift detection, and alerting (a drift-check sketch follows this checklist).
- Integrate with EHR workflows and care navigation tools to ensure continuity.
- Adopt privacy-conscious design and strong security protocols from the outset.
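For the drift detection called out above, one common, simple check is the Population Stability Index (PSI) between training-era score distributions and current production scores. The sketch below is illustrative; the bin count and the 0.2 alert threshold are conventions to tune locally, not validated constants.

```python
import numpy as np

def psi(expected, actual, bins: int = 10) -> float:
    """Population Stability Index between a reference distribution
    (e.g., validation-era model scores) and current production scores.
    Rule of thumb (an assumption, tune locally): PSI > 0.2 warrants review."""
    expected, actual = np.asarray(expected), np.asarray(actual)
    # Equal-mass bin edges taken from the reference distribution.
    inner_edges = np.quantile(expected, np.linspace(0, 1, bins + 1)[1:-1])
    def frac(x):
        counts = np.bincount(np.digitize(x, inner_edges), minlength=bins)
        return np.clip(counts / len(x), 1e-6, None)   # avoid log(0)
    e, a = frac(expected), frac(actual)
    return float(np.sum((a - e) * np.log(a / e)))

# e.g., alert when psi(validation_scores, last_month_scores) > 0.2
```

A scheduled job running this check against each deployed model, with results written to the model inventory, turns the monitoring commitment into something auditable.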
For Compliance and Governance
- Form an oversight committee for AI that includes clinical, data, legal, and patient representatives.
- Maintain a model inventory detailing intended use, risk rating, validation evidence, and ownership.
- Align tools with relevant regulations and standards—such as FDA, ONC HTI-1, NIST AI RMF, and internal policies.
- Mandate third-party evaluations for high-risk tools, especially those that influence diagnostics or therapy choices.
Common Pitfalls – And How to Avoid Them
- Deploying AI without clear ownership – Designate a clinical champion and a product owner; without accountability, even exceptional tools can fail to scale.
- Equating explainability with accuracy – Explanations can mislead; always back them with rigorous validation and uncertainty metrics.
- Neglecting data drift – Schedule periodic re-validation; oncology rapidly evolves, and your model must adapt accordingly.
- Underestimating change management resources – Training, communication, and support are essential and should not be seen as optional components.
- Measuring solely average performance – Mean outcomes can obscure subgroup disparities; always assess equity.
Looking Ahead – AI as a Force Multiplier for Equity
When utilized effectively, AI can significantly promote equity in precision oncology:
- Standardizing the application of evidence, reducing the impact of geographic and resource disparities.
- Extending expert workflows from academic centers to community clinics.
- Identifying and addressing care gaps in near real-time.
- Supporting patients by providing clear information and navigation assistance.
However, the promise of AI hinges on establishing trust and transparency. This requires accountability for performance, honesty regarding limitations, and engaging both clinicians and patients as partners. With the right safeguards, AI can help ensure that precision oncology becomes a standard for many rather than a luxury for a select few.
FAQs
1) What Makes an AI Tool in Oncology Trustworthy?
A trustworthy AI tool is characterized by clear intended use, strong validation across diverse populations, transparent documentation (including subgroup performance and limitations), robust privacy and security measures, and consistent real-world monitoring.
2) Can AI Reduce Disparities in Precision Oncology?
Yes, but only if equity is an explicit design goal. This necessitates the use of representative data, subgroup evaluations, human-centered workflows, and metrics that track access and outcomes across different populations. Otherwise, AI can exacerbate existing disparities.
3) How Do Regulations Affect AI Used in Cancer Care?
Depending on the tool’s functionality, it may be subject to FDA oversight as a medical device, particularly if it influences diagnosis or therapy. Additionally, ONC sets transparency expectations for algorithms in certified EHRs. These frameworks help ensure safety and accountability.
4) Do Clinicians Need to Understand How the Model Works Internally?
While they don’t need to read the code, clinicians require actionable transparency regarding the tool’s function, its accuracy for similar patients, limitations, and update frequency.
5) What Are the First Steps for a Health System Getting Started?
Identify a high-impact, narrow use case addressing a documented care gap (e.g., missed biomarker testing). Set up governance, outline success metrics including equity measures, pilot in a few clinics while providing strong training and feedback, and then scale carefully.
Sources
- AJMC – Trust and Transparency Key for Leveraging AI to Expand Access to Precision Oncology
- AACR Cancer Disparities Progress Report 2024
- CMS National Coverage Determination 90.2 – Next Generation Sequencing
- ESMO Open – Molecular tumor boards: real-world implementation
- NIST AI Risk Management Framework 1.0
- WHO – Regulatory considerations for AI in health
- FDA – Predetermined Change Control Plan Guidance for ML-enabled devices
- FDA – Good Machine Learning Practice
- ONC – HTI-1 Final Rule
- European Parliament – EU AI Act overview
- HHS OCR – Guidance on online tracking technologies
- NIH All of Us Research Program
- BMJ – SPIRIT-AI reporting guideline
- BMJ – CONSORT-AI reporting guideline
- Science – Dissecting racial bias in an algorithm used to manage the health of populations