
Why AI’s Biggest Blind Spot In Pharma Isn’t Technical

This article was originally published by Anna Forsythe in Forbes on 03 March 2026.

I spend a lot of time working with artificial intelligence, and I am constantly struck by a contradiction: On one hand, AI has become remarkably fluent. It can summarize dense material, surface insights from massive datasets and produce text that looks confident enough to pass for expertise. On the other hand, in some of the places where accuracy matters most, AI remains almost entirely clueless about real decision making. Pharmaceutical evidence generation is one of those places.

A Systemic Blind Spot

Despite years of excitement about AI in healthcare, there are still no widely used systems that autonomously maintain living, submission-ready evidence for regulatory or reimbursement purposes. This is often framed as a technology gap. In my experience, it is much more than a gap; it is a systemic blind spot, and not one easily remedied. It exposes a misunderstanding of what regulated evidence actually is and what today’s AI systems are built to do.

Systematic literature reviews are not summaries. Nor are they answers to questions posed after the fact. They are strictly governed by carefully regulated processes. They begin with predefined protocols, require transparent inclusion and exclusion logic, and must withstand scrutiny long after the original analysis is complete. Regulators and payers do not just ask what conclusions were reached. They ask how those conclusions were formed, what was excluded along the way and whether the same decisions would be made again if the review were repeated.

This is where conversational AI systems struggle in ways that cannot be fixed with better prompts or larger models. Large language models are optimized for plausibility, not traceability. They are designed to produce likely responses, not to preserve the reasoning trail behind each decision. While they can describe evidence convincingly, they cannot reliably explain why one study mattered more than another, or why a marginal trial was excluded at a particular point in time. When the literature updates—as it constantly does in oncology—those inconsistencies compound.

I have seen teams experiment with using chatbots to “speed up” evidence reviews, only to discover that what looks efficient at first quickly becomes indefensible under scrutiny. The problem is not that the chatbot models are unsophisticated. It is that the task itself—the special sauce behind the systematic literature review—is not generative in nature. That special sauce needs to be procedural, auditable and accountable.

At the same time, the pressure to maintain living, constantly updated evidence has never been higher. Clinical data no longer arrives in neat cycles. New trials appear between guideline updates. Regulatory decisions shift comparators. Payers ask questions that did not exist when the original review was written. Static evidence simply cannot keep up.

This has created a strange stalemate. Fully manual processes are too slow and resource-intensive. Fully automated ones are not trustworthy. Many organizations quietly accept the friction, even as they invest heavily in AI elsewhere.

A Change In Design

What ultimately breaks that stalemate is not better AI, but a different way of designing work. In regulated environments, the only approach that scales is human-in-the-loop intelligence. Machines do what they are good at—continuous surveillance, structured extraction, pattern detection—while humans retain ownership of judgment, interpretation and accountability. When designed properly, this does not slow teams down. But it does change where expertise is applied.

What surprises many leaders is that this challenge is not unique to pharma. Several years ago, I had a conversation with an executive in commercial aviation who described a similar tension. Modern aircraft are astonishingly automated. They can take off, navigate complex airspace and land with minimal human input. Yet aviation has never tried to remove pilots from the cockpit. In fact, as automation has increased, pilot training has become more rigorous, not less.

The reason is trust. When something goes wrong at 35,000 feet, no one accepts “the system thought it was likely” as an explanation. Automation is expected to assist, not absolve. Human oversight is not a fallback; it is part of the system’s credibility.

Evidence generation works the same way. Regulators do not reject AI because it is new. They reject opacity. They expect to see where judgment was applied, and by whom. Systems that blur that boundary undermine trust, even if their outputs look impressive.

What’s Holding Organizations Back?

What ultimately holds organizations back from building these hybrid systems is rarely technology. It’s culture. Most companies are still organized around projects with defined endpoints, not living assets that require continuous stewardship. Evidence is treated as a document to be delivered, not an infrastructure to be maintained. AI initiatives are evaluated on novelty and visibility, not on whether they quietly reduce friction year after year.

Changing that requires leadership restraint. It means resisting the temptation to deploy tools that demo well but cannot be defended later. It also means investing in governance, workflow redesign and cross-functional ownership—none of which make headlines, but all of which determine whether AI creates real value.

The most important lesson I have learned working at the intersection of AI and regulated decision making is this: Fluency is not the same as reliability. The organizations that succeed with AI will not be the ones that generate the fastest answers, but the ones that can explain and stand behind those answers when it matters. In pharma, as in aviation, intelligence is only as valuable as the trust it earns. And that means human beings in the chain of command.
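For technically minded readers, the traceability requirement the article describes can be made concrete. What follows is a minimal, hypothetical Python sketch, with invented class and field names rather than any vendor’s actual schema, of the kind of append-only decision log that lets a review answer “why was this study in or out, and when?”:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class ScreeningDecision:
    """One immutable inclusion/exclusion decision in a systematic review."""
    study_id: str    # e.g., a publication identifier
    decision: str    # "include" or "exclude"
    criterion: str   # the predefined protocol rule that was applied
    reviewer: str    # the accountable human
    decided_on: date

@dataclass
class ReviewAuditTrail:
    """Append-only log: decisions are recorded, never overwritten."""
    protocol_id: str
    decisions: list[ScreeningDecision] = field(default_factory=list)

    def record(self, decision: ScreeningDecision) -> None:
        self.decisions.append(decision)

    def why(self, study_id: str) -> list[ScreeningDecision]:
        """Answer the regulator's question: why was this study in or out,
        at each point in time the review was updated?"""
        return [d for d in self.decisions if d.study_id == study_id]
```

The point is the shape of the record, not the code: every exclusion carries a protocol rule, a named reviewer and a date, which is precisely what a plausibility-optimized text generator does not preserve.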

Pharma’s Biggest Missed AI Opportunity Is Living Evidence

This article was originally published by Anna Forsythe in Forbes on 29 January 2026.

At this year’s J.P. Morgan Healthcare Conference, the largest healthcare investment symposium in the industry, it is no surprise that artificial intelligence featured prominently across a wide range of discussions, from drug discovery, target identification and molecule design to clinical trial optimization and operational efficiency. AI applications are now fully embedded in each of these core pharmaceutical R&D strategies.

What was far less visible, however, was the role AI could play in evidence generation—the systematic evaluation of scientific literature that underpins nearly every strategic, regulatory and reimbursement decision in modern pharma. This omission is notable, even critical, at a time when AI-assisted evidence generation represents one of the industry’s most immediate and measurable opportunities for return on AI investment.

Where AI Is Being Applied Today

Current AI adoption in pharma tends to focus on highly visible areas closely associated with innovation, such as accelerating discovery timelines, improving trial execution and supporting internal productivity. These use cases already demonstrate long-term value and competitive differentiation.

Still, the majority of high-stakes decisions in pharma do not hinge on discovery algorithms alone. Instead, they depend, as they have for decades, on structured assessments of existing evidence about disease burden and unmet need, historical endpoints and comparator performance, safety signals and evolving standards of care. These traditional assessments inform decisions ranging from trial design and asset valuation to regulatory strategy and pricing. Despite their importance, evidence workflows remain largely manual and highly fragmented.

Navigating Using Outdated Maps

A useful analogy is navigation. When trying to reach a destination, no one relies on an outdated static map (remember MapQuest?) printed years ago. Roads change, traffic patterns evolve and more efficient routes emerge constantly. Modern navigation relies on GPS systems that update continuously and reroute in real time.

Pharma, however, still navigates critical decisions using static evidence reviews. Systematic Literature Reviews (SLRs), which have long been the gold standard for evidence synthesis, continue to be conducted as project-based exercises. This one-off approach is expensive and time-consuming, and the results are quickly outdated as new publications appear, guidelines are revised or new therapies enter the market. Once completed, these project-based exercises often live in disconnected silos, requiring tweaking or partial reconstruction to support the next decision. In a scientific environment that evolves daily, this reliance on static evidence is an increasingly poor solution, especially when living, continuously updated maps offer a cost-effective alternative.

Increasing Regulatory And Reimbursement Pressure

The limitations of static evidence are becoming more consequential as medical reimbursement systems evolve. In the United States, Medicare price negotiations are now in their third cycle under the Inflation Reduction Act. Medicare Part B drugs in oncology, for example, once largely insulated from pricing negotiations, are fully in scope as of 2026. Manufacturers are expected to justify pricing not only based on evidence available at launch but also relative to new comparators and changing standards of care that continuously emerge over time.

In Europe, the Joint Clinical Assessments (JCA), designed to create a unified, cross-national analysis of the efficacy of new drugs, raise expectations further. Companies must consider all relevant comparators across all EU member states, address multiple subpopulations and present comprehensive, transparent evidence syntheses that can withstand scrutiny across multiple jurisdictions. In both settings, evidence is no longer assessed at a single point in time. Conventional static snapshots struggle to keep pace with regulatory and reimbursement demands that are continuously re-evaluated.

The Cost Of Fragmentation

Despite this pressure, evidence generation in pharma remains highly fragmented. Different functions (R&D, regulatory, health economics, market access, commercial) often commission their own literature reviews for similar questions. Reviews are modified, repeated and localized across regions, frequently by different external vendors and internal teams. Assumptions diverge. Institutional knowledge is lost. Redundancy accumulates.

That redundancy is costly. A single high-quality SLR routinely costs six figures and takes months to complete. For global organizations with large portfolios, the cumulative cost of duplicated effort is substantial. More importantly, fragmented evidence increases the risk of inconsistency at moments when alignment matters most.

Why General-Purpose AI Falls Short

Generative AI tools like ChatGPT and other chatbots are often cited as a solution. While useful for summarization or exploration, they are not designed to produce regulatory-grade evidence. Regulatory and reimbursement decisions require predefined methods, transparent inclusion criteria, traceable citations, reproducibility and alignment with established systematic review standards. Outputs must be auditable and defensible. General-purpose AI systems prioritize fluency over traceability and cannot replace structured evidence synthesis. In low-risk settings, speed may outweigh rigor. In regulated environments, rigor is non-negotiable.

The Case For Living Evidence

The alternative is a shift from static reviews to living evidence. A living evidence approach treats evidence as shared infrastructure rather than as a series of isolated projects. Evidence is continuously updated as new data emerges, centrally governed, and organized by indication, population, comparator and endpoint. Updates are incremental rather than repetitive, and changes are transparent. Functionally, this mirrors how GPS systems work: always current, responsive to new information and capable of supporting multiple routes and decisions from the same underlying map. Such an approach could support better decision-making across the product life cycle, reduce duplication and improve consistency under increasing regulatory and reimbursement scrutiny.

Why The Shift Has Been Slow

If the potential benefits are clear, why has adoption been limited? One reason is organizational structure. Evidence budgets are typically allocated by function, by brand and by project. Living evidence, by contrast, is longitudinal and cross-functional. Adoption requires investment at an enterprise level rather than ownership by a single team.
Living evidence is also, by its nature, less visible than discovery breakthroughs or novel technologies. Yet visibility and return are not the same. As AI continues to reshape pharma, the most impactful opportunities may lie not only in discovering new drugs faster, but in navigating the increasingly complex evidence landscape more intelligently. In an industry under growing pressure to demonstrate a return on its AI investments, living evidence may be exactly that opportunity.
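For readers who think in data structures, here is a minimal sketch, in Python, of the GPS-style model the article describes: evidence organized by indication, population, comparator and endpoint, with incremental, transparent updates. All names are hypothetical; this illustrates the idea, not any particular product:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass(frozen=True)
class EvidenceEntry:
    """One finding, indexed the way a living evidence base is organized."""
    indication: str
    population: str
    comparator: str
    endpoint: str
    source: str      # citation kept for traceability
    added_on: date

@dataclass
class LivingEvidenceBase:
    """Shared infrastructure: one map, updated incrementally, queried by many teams."""
    entries: list[EvidenceEntry] = field(default_factory=list)

    def update(self, incoming: list[EvidenceEntry]) -> list[EvidenceEntry]:
        """Incremental update: add only what is genuinely new, and return it
        as a transparent change log, instead of re-running the whole review."""
        known = set(self.entries)  # frozen dataclass instances are hashable
        added = [e for e in incoming if e not in known]
        self.entries.extend(added)
        return added

    def route(self, indication: str, comparator: str) -> list[EvidenceEntry]:
        """Many 'routes' (decisions) can be served from the same underlying map."""
        return [e for e in self.entries
                if e.indication == indication and e.comparator == comparator]
```

The design choice worth noticing is that `update` returns a change log rather than a rebuilt document: each cycle adds to a shared map that every function can query, which is the structural difference between living evidence and a stack of one-off SLRs.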

Anna Forsythe at ESMO AI & Digital Oncology 2025

ESMO AI & Digital Oncology Podium Presentation

Last week, Oncoscope-AI Founder & CEO Anna Forsythe gave a podium presentation to a packed room of clinicians, researchers, and industry leaders at ESMO AI & Digital Oncology in Berlin. The subject? What Oncoscope-AI does best: Living Systematic Review Linked to Guidelines and Regulatory Approvals as a Treatment Decision Support Tool. Now you can see Anna’s full talk too:

Oncoscope-AI Founder & CEO presents at ESMO AI & Digital Oncology Congress in Berlin, Germany, 12 November 2025: “Living Systematic Review Linked to Guidelines and Regulatory Approvals as a Treatment Decision Support Tool.”

Why Chatbots Aren’t Enough In Oncology

This article was originally published by Anna Forsythe in Forbes on 13 November 2025.

In the fast-moving world of oncology, clinical decision making has never been more complex—or more urgent. Thousands of new cancer studies are published every month, each with findings that could alter treatment pathways or reshape guidelines. For oncologists, research teams, hospitals and payers, the challenge isn’t simply finding information—it’s finding the right information, quickly and confidently.

The market is full of AI-powered tools promising help. Many rely on large language models (LLMs) and chatbot-style interfaces that offer answers in conversational form. The appeal is obvious: type in a query, get an instant response. But in oncology—where the stakes are measured in survival rates—ease of use is not enough.

Why Decisions Are So Complicated

Consider a patient with late-stage lung cancer whose tumor harbors a rare genetic mutation. This is the reality of modern oncology, which offers targeted therapies for specific genetic mutations. The physician must weigh the disease stage, prior therapies, co-morbidities and preferences. They must verify whether a targeted therapy exists, check FDA approvals, review guideline recommendations and explore whether a clinical trial could provide access to the latest investigational drug. This involves combing through journal articles, conference abstracts and regulatory documents—each a piece of the puzzle. There is no “one-size-fits-all” solution in an era where targeted therapies produce individualized pathways.

A chatbot might return a single response based on an editorial or opinion piece it “remembers,” presenting it as definitive. The nuance—say, that another trial showed limited efficacy in heavily pre-treated patients, or that guidelines recommend a different approach after immunotherapy—can easily be lost.

The Gold Standard: Systematic, Comprehensive, Expert-Vetted

Medicine relies on the hierarchy of evidence. At its peak sit systematic reviews and meta-analyses—studies that evaluate and synthesize all available research. Regulatory agencies like the FDA, as well as organizations such as the American Society of Clinical Oncology (ASCO) and the National Comprehensive Cancer Network (NCCN), have long required systematic reviews as the foundation for guidelines and approvals.

An effective oncology decision support tool must therefore also be systematic, with transparent, reproducible searches of all relevant research. It must be comprehensive, drawing from peer-reviewed journals, guidelines, conference abstracts and regulatory filings. It must be robust in distinguishing between high-quality randomized trials and weaker evidence. Just as importantly, it must update continuously (ideally daily) to reflect the latest research; medical decisions based on outdated knowledge risk outdated care. And trained oncologists and other specialists must vet its output to ensure the conclusions are accurate.

Where Chatbots Fall Short

I’ve found that even the most advanced LLMs cannot meet those criteria. Their weaknesses are structural. Built for speed and limited in transparency, chatbots rarely disclose their sources. They may omit references entirely, and without systematic searching, key studies are often missed. Their datasets often exclude recent guideline updates or pivotal conference results. Moreover, as black boxes reliant on opaque algorithms, chatbots provide no evidence grading. An editorial can appear with the same weight as a phase three trial.
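The evidence-grading gap is easy to picture in code. Below is a minimal, hypothetical sketch of ranking retrieved sources by the standard hierarchy of evidence; the field names and ranking table are illustrative assumptions, not any tool’s real schema:

```python
# Hypothetical sketch: weight sources by the standard hierarchy of evidence,
# so an editorial can never outrank a systematic review or a phase 3 trial.
EVIDENCE_RANK = {
    "systematic_review_or_meta_analysis": 1,  # top of the hierarchy
    "randomized_controlled_trial": 2,
    "cohort_study": 3,
    "case_series": 4,
    "editorial_or_expert_opinion": 5,         # lowest weight
}

def strongest_first(studies: list[dict]) -> list[dict]:
    """Sort retrieved studies so higher-quality designs surface first.
    Each study dict is assumed to carry a 'design' key; unknown designs
    sink to the bottom rather than being silently trusted."""
    return sorted(studies, key=lambda s: EVIDENCE_RANK.get(s.get("design"), 99))
```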
Chatbots may even fabricate references—the widely reported “hallucinations.” In my experiments, queries have sometimes returned outdated or false information; in one instance, a chatbot cited a non-existent study to me. Transparency of the dataset is critical, especially in a field where thousands of new studies are published each month. Using AI on an iPhone to call a taxi is convenient. But in oncology, where each decision can alter survival, these shortcomings aren’t just inconvenient; for a patient with a rare mutation, they can mean the difference between hope and harm.

Beyond Oncology: A Universal Lesson

The risk of relying on incomplete or unverified evidence isn’t unique to cancer care. In finance, successful portfolio managers don’t bet other people’s money on one analyst’s hunch; they use meta-analyses of market data. In aviation, flight safety depends on synthesizing thousands of reports and assessments. No pilot would fly based on a chatbot’s opinion about turbulence. In public health, vaccine rollouts depend on systematic reviews of global trial data, not a handful of preliminary studies. Across industries, convenience cannot replace rigor. The ideal system in oncology—and other data-driven fields—is an expert-driven partner that can provide trustworthy insights.

The Human And AI Solution

Despite its limitations inside chatbots, the beauty of AI is how it can scan millions of documents in seconds, helping detect patterns and surface relevant studies. With the mountain of data produced every day, that capability is undeniably important. But human experts are needed to bring judgment, clinical context and critical thinking to the mix.

I think the winning model is a living systematic literature review (SLR)—continuously updated by AI, structured through reproducible methodology and validated by experts. (Disclosure: I lead an AI-assisted oncology evidence platform built on this type of approach.) LLMs power today’s chatbots—but they can also hallucinate or misread complex evidence. The approach I champion still uses LLMs, but with continuous expert oversight. Every data point should be verified by trained analysts and clinicians, eliminating hallucinations and ensuring full transparency.

That said, I find this hybrid model effective but demanding. It requires capital, expertise and time to build for each cancer type. And even then, people still prefer someone or something they can talk to. The future may lie in combining both approaches—a conversational chatbot connected to a rigorously curated, expert-verified database. By working to overcome these hurdles, pharmaceutical companies, payers and healthcare networks stand to benefit as much as clinicians. Beyond oncology, systematic, AI-augmented evidence synthesis has the potential to streamline internal decision making, support value-based care initiatives, strengthen negotiations and reduce duplication across research teams.

The Bottom Line

AI is here to stay, and its potential in healthcare is enormous. But in oncology—and in every field where lives or livelihoods are at stake—it must be deployed with discipline. Chatbots may offer instant, conversational answers, but approachability is not the same as reliability.

Anna Forsythe, pharmacist & health economist, is the Founder & CEO of Oncoscope-AI.

Smarter Oncology, Faster Access: How Anna Forsythe and Oncoscope-AI Are Reengineering Cancer Care

This article was originally published in Entrepreneur on 25 September 2025.

In oncology, a cascade of new trials, approvals, and guideline updates has become the norm. Yet the systems meant to translate that progress into care haven’t kept pace. Clinicians and product teams are inundated with data, but rarely is it organized to support fast, defensible decisions at the point of need. The result is often delays in care and lost time for patients.

Recognizing this disconnect, Anna Forsythe, a pharmacist, health economist, and founder of Oncoscope-AI, built a solution. Motivated by personal experience, including friends and colleagues who received care that lagged behind the evidence, she fused clinical insight with commercial acumen to create a platform that supports clinical decision-making and strategic evidence planning.

Her timing couldn’t be more critical. Today’s environment is split between regulatory acceleration and payer caution. Regulators increasingly offer accelerated pathways for medicines addressing unmet needs. Meanwhile, national payers demand deeply contextualized evidence before granting public reimbursement. This tension, compounded by external reference pricing and strategic launch sequencing, has led to uneven access across countries. Oncoscope-AI was designed to operate precisely in this gap between clinical urgency and regulatory rigor.

For physicians, it can compress hours of literature review into seconds. By entering stage, genetic markers, and prior therapies, clinicians receive a human-reviewed, guideline-aligned summary that surfaces survival outcomes, progression metrics, toxicity data, and approval status. Each data point is linked directly to primary studies and relevant guidelines for transparency and traceability.

This clinical precision is matched by commercial depth. Market access teams can define Population, Intervention, Comparator, Outcome (PICO) criteria, retrieve relevant studies, map drug availability across jurisdictions, and run simulations to model market impact and reimbursement risk. These capabilities are increasingly vital amid European Health Technology Assessment (HTA) reform, where Joint Clinical Assessments (JCAs) will standardize evidence evaluation across Member States. For qualifying products, developers must now submit evidence simultaneously to both the European Medicines Agency and the HTA secretariat, raising the bar for dossier preparation.

Anticipating this evolution, Oncoscope-AI’s roadmap now integrates European regulatory guidance, reimbursement decisions, and localized guideline text. It also provides exportable, auditable evidence tables to support dossier preparation. Its simulation engine runs on a continuously updated, expert-validated dataset. This helps ensure that market models reflect current trial outcomes and regulatory activity, not static literature snapshots.

Forsythe shares her observations of industry behavior, acknowledging why companies sequence launches and manage pricing. “These are rational responses to fiscal realities and international price governance. But I believe technology can mitigate the inequities those strategies often produce,” she says.

Oncoscope-AI blends trained AI with human curation. The AI scans registries, preprints, journals, and filings to surface signals at scale. Domain experts validate relevance, extract numerical endpoints, and provide regulatory context. “Physicians don’t need more reading material,” Forsythe says. “They need the timely, relevant information that is tailored to the patient in front of them.”

For pharmaceutical teams, this translates into strategic preparedness. By identifying emerging comparators, simulating comparative effectiveness, and organizing evidence into auditable PICO-driven exports, companies can build stronger, timelier market access dossiers and anticipate reimbursement questions before they escalate. Industry analysts and consultancies have urged similar readiness strategies as the JCA takes effect.

Users are already seeing results. Oncoscope-AI’s simulation outputs pinpoint country-level evidence gaps and shorten dossier preparation. Exportable, PICO-aligned tables and country trackers allow teams to respond the moment a guideline or reimbursement decision changes, without restarting literature reviews from scratch.

It’s worth emphasizing that Forsythe frames equitable access not as a moral debate, but as a design challenge. She argues that system-level fixes, rather than a sole focus on industry behavior, will expand reach. Oncoscope-AI positions itself as a bridge between AI innovation and regulatory rigor at a time when scientific velocity often outpaces legacy workflows. The platform isn’t built for shortcuts. It’s built for readiness: an auditable, clinician-trusted channel from discovery to delivery.

For Forsythe, the mission is both professional and ethical. She says, “If we want better outcomes in cancer care, we don’t need more information; we need smarter information.”
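The PICO-driven retrieval described in the article can be sketched simply. The Python below is a hypothetical illustration only: the study schema, the example drug and the criteria are invented for the example and are not Oncoscope-AI’s actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class PICO:
    """Population, Intervention, Comparator, Outcome: the frame a market
    access team defines before retrieving evidence."""
    population: str
    intervention: str
    comparator: str
    outcome: str

def matches(study: dict, criteria: PICO) -> bool:
    """Keep only studies that satisfy every predefined PICO element.
    The study fields below are an invented schema for illustration."""
    return (criteria.population in study.get("population", "")
            and criteria.intervention in study.get("interventions", [])
            and criteria.comparator in study.get("comparators", [])
            and criteria.outcome in study.get("outcomes", []))

# Invented example: screening a tiny corpus against one PICO question.
criteria = PICO("EGFR-mutated NSCLC", "osimertinib", "chemotherapy", "overall survival")
corpus = [
    {"population": "EGFR-mutated NSCLC", "interventions": ["osimertinib"],
     "comparators": ["chemotherapy"], "outcomes": ["overall survival"]},
    {"population": "extensive-stage SCLC", "interventions": ["atezolizumab"],
     "comparators": ["placebo"], "outcomes": ["progression-free survival"]},
]
selected = [s for s in corpus if matches(s, criteria)]  # keeps only the first study
```

Because the criteria are declared up front rather than improvised per query, the same filter can be re-run as new studies arrive, which is what makes the resulting evidence tables auditable.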

The Market Access Podcast: Will AI and Living Reviews Define the Next Era of Health Care Market Access?

Oncoscope-AI Founder & CEO Anna Forsythe was recently on the Market Access Podcast with Dr. Stefan Walzer to discuss how Living Systematic Literature Reviews (Living SLRs) are redefining evidence generation in oncology and beyond – highlighting the power of real-time updates, advanced automation, and the essential role of human insight. Traditional SLRs are static snapshots, while Living SLRs are real-time, dynamic, and AI-powered—delivering continuously updated insights crucial for life-or-death decisions and payer evaluations. Join this discussion as they explore the myth of AI chatbots as true decision support tools, the need for actionable data over summaries, and the future of evidence synthesis, clinical decision-making, and smarter market access. The episode is available on Spotify, YouTube, and PocketCasts.

The Danger of Imperfect AI: Incomplete Results Can Steer Cancer Patients in the Wrong Direction

This article was originally published in International Business Times on 09 October 2025.

Cancer patients cannot wait for us to perfect chatbots or AI systems. They need reliable solutions now—and not all chatbots, at least so far, are up to the task.

I often think of the dedicated and overworked oncologists I have interviewed who find themselves drowning in an ever-expanding sea of data: genomics, imaging, treatment trials, side-effect profiles, and patient co-morbidities. No human can process all of that unaided. Many physicians, in an understandable and even laudable effort to stay afloat, are turning to AI chatbots, decision-support models, and clinical-data assistants to help make sense of it all. But in oncology, the stakes are too high for blind faith in black boxes.

AI tools offer incredible promise for the future, and AI-augmented decision systems can improve accuracy. One integrated AI agent increased decision accuracy from 30.3% to 87.2% compared to the baseline GPT-4 model. Clinical decision AI systems in oncology already assist in treatment selection, prognosis estimates, and synthesizing patient data. In England, for example, an AI tool called “C the Signs” helped boost cancer detection in GP practices from 58.7% to 66.0%. These are encouraging steps. But anything below 100 percent is not enough when life is at stake.

Cancer patients cannot afford to wait for us to resolve the issues these technologies still have. We risk something far worse than delay; we risk bad decisions born from incomplete, outdated, or altogether fabricated information.

One of the worst issues is “AI hallucination”: cases where the AI has been found to present false information, invented studies, nonexistent anatomical structures, and incorrect treatment protocols. In one shocking example, Google’s health AI misdiagnosed damage to a “basilar ganglia,” an anatomical part that doesn’t exist. The confidently presented output looked authoritative until physicians recognized the error.

Recent testing of six leading models, including offerings from OpenAI and Google’s Gemini, revealed just how unreliable these systems can be in medicine. They produced confident, step-by-step explanations that looked persuasive but were riddled with errors, ranging from incomplete logic to entirely fabricated conclusions. In oncology, where every patient is an outlier, that margin of error is unacceptable. Even specialized medical chatbots, which may sound authoritative, still present opaque and untraceable reasoning—their sources inconsistent, and their statistics often meaningless. This is decision distortion.

The legal and ethical implications are real. If a treatment based on AI guidance causes harm, who is liable? The physician? The hospital? The AI developer? Medical-legal frameworks are scrambling to catch up, with some warning that overreliance on AI without human oversight could itself constitute negligence.

The problem of AI hallucination extends beyond the medical realm. In the legal world, AI hallucinations have already led to serious consequences: in at least seven recent cases, courts disciplined lawyers for citing fake case law generated by AI. In one high-profile case, Morgan & Morgan attorneys were sanctioned after submitting motions containing bogus citations. If courts are demanding accountability for AI mistakes in law, how long before the medical malpractice lawsuits start being filed?

In oncology especially, reliance on AI amplifies risk because of how the tools are trained. Many large language models or decision systems depend on fixed journal cohorts or curated datasets. New oncology breakthroughs may remain outside that training collection for months or years. When we query such a system, it may omit the newest trial, ignore emerging biomarkers, or default to an outmoded standard of care. When AI invents studies or hallucinates efficacy, and doctors rely on it, patients pay the price.

Moreover, cutting-edge medical data is often fragmented, diversified, and non-standardized; imaging formats differ, electronic health record notes are not uniform, and rare biomarkers may exist only in supplementary data. AI does best with well-structured, consistent data; it struggles with the disorder at the frontier of research. That means decisions about novel or borderline cases may be precisely where AI is least reliable.

I’m not arguing that we scrap AI in cancer care. On the contrary, we must keep developing these tools, pushing boundaries, harnessing the power of computation to spot patterns no human sees. But we must not hand over ultimate decision-making authority to them, at least not yet. Cancer patients deserve better than experiments. They deserve human physicians who remain in the loop, who audit, challenge, and interrogate AI outputs.

We need an architecture of human and AI collaboration. When a chatbot suggests a regimen, the oncologist should review supporting evidence, check for newly published trials, and confirm that the model’s assumptions match the patient’s specifics. The physician must own the decision.

We can establish effective guardrails by implementing regular validation of AI systems with updated clinical data. By promoting transparency in training sources and mandating human review of all AI-suggested decisions, we can enhance overall trust in these technologies. Additionally, developing clear liability rules will help ensure accountability and foster responsible innovation. In practice, that means clinics deploying AI decision tools should monitor AI output, compare outcomes, run audits, and allow physicians to override or correct AI suggestions. We must also push for standardization of data, sharing across institutions, open and timely inclusion of new studies, and rigorous mechanisms to flag contradictions or hallucinations. Without that, the models will always lag the frontier.

Cancer patients cannot wait for us to achieve AI perfection. But they deserve the best possible care now, and that requires that we never abandon human responsibility in the name of speed. AI must serve as an assistant, not a dictator. Humans are in charge of deliberation and decision-making, and they must always prioritize caution when faced with unverified or ambiguous algorithmic output. AI chatbots are tools, not authorities. When we start letting algorithms decide instead of doctors, we have crossed from medicine into potential malpractice.

Cancer patients don’t need perfect chatbots. They don’t have the time for the technology to catch up, and they cannot afford doctors who make decisions based on incomplete or outdated information. For patients and their families, the stakes are too high, and they deserve a much higher standard of care.
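The guardrail the article calls for, a named physician signing off on every AI suggestion, can be expressed as a simple workflow rule. Here is a minimal, hypothetical Python sketch; the types and the citation check are illustrative assumptions, not a description of any deployed system:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class AISuggestion:
    regimen: str
    supporting_citations: tuple[str, ...]  # must be non-empty and checkable
    generated_at: datetime

@dataclass(frozen=True)
class SignedDecision:
    suggestion: AISuggestion
    physician: str   # the named human who owns the decision
    approved: bool   # physicians can, and sometimes should, override
    rationale: str   # the human reasoning, kept for audit
    signed_at: datetime

def finalize(suggestion: AISuggestion, physician: str,
             approved: bool, rationale: str) -> SignedDecision:
    """No AI suggestion becomes a care decision without a named physician's
    explicit sign-off, and every approval or override is recorded."""
    if not suggestion.supporting_citations:
        raise ValueError("Suggestion has no citations; it cannot be reviewed, "
                         "let alone acted on.")
    return SignedDecision(suggestion, physician, approved, rationale,
                          signed_at=datetime.now())
```

The structural point is that the AI output and the human decision are distinct records: the suggestion is rejected outright if it arrives without reviewable citations, and the physician’s sign-off, including overrides, is what enters the audit trail.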

OncoDaily Interview: Could Oncoscope-AI Save Clinicians Hours – and Spare Patients Side Effects?

It’s good to talk about our why sometimes. That’s why we appreciate Emma Ter-Azaryan’s interview so much. Not just for her insightful questions, but for giving us an opportunity to publicly reflect on what Oncoscope-AI means to us.

In this interview with OncoDaily, Oncoscope-AI Founder & CEO Anna Forsythe shares what drives her personally, an example of an oncologist using the tool and the impact it had, and the frustration of seeing people we love treated with chemotherapy because their doctors weren’t aware of updates in the guidelines and the research behind them. You can watch the full interview “Could Oncoscope-AI Save Clinicians Hours – and Spare Patients Side Effects?” here:

From OncoDaily: “In this episode of OncoDaily TV, host Emma Ter-Azaryan speaks with Anna Forsythe, CEO & Founder of Oncoscope-AI, to unpack how clinicians can cut through oncology’s data overload—FDA labels, guidelines, congress abstracts, and papers—and get to the right evidence in just a few clicks. What you’ll learn:

✅ What Oncoscope-AI is (in simple terms): a clinician-friendly “Expedia for evidence” that pulls from major medical databases, guidelines, regulatory updates, and congress outputs—cross-linked in one place.
✅ Essential vs. Edge: two workflows—patient-first decision support vs. deep-dive topic exploration (e.g., ADCs in lung cancer, mutation-specific updates).
✅ Power features: clickable disease maps, filter by congress (ASCO, World Lung, etc.), tumor-board prep, and one-click prior-auth reports with citations.
✅ Real-world impact: how a brand-new FDA approval surfaced that week and helped a patient access a better-tolerated therapy sooner.”
