The Danger of Imperfect AI: Incomplete Results Can Steer Cancer Patients in the Wrong Direction

This article was originally published in International Business Times on 09 October 2025.

Cancer patients cannot wait for us to perfect chatbots or AI systems. They need reliable solutions now, and not all chatbots, at least so far, are up to the task.

I often think of the dedicated and overworked oncologists I have interviewed who find themselves drowning in an ever-expanding sea of data: genomics, imaging, treatment trials, side-effect profiles, and patient co-morbidities. No human can process all of that unaided. Many physicians, in an understandable and even laudable effort to stay afloat, are turning to AI chatbots, decision-support models, and clinical-data assistants to help make sense of it all. But in oncology, the stakes are too high for blind faith in black boxes.

AI tools offer incredible promise, and AI-augmented decision systems can improve accuracy. One integrated AI agent increased decision accuracy from 30.3% to 87.2% compared with a baseline GPT-4 model. Clinical decision AI systems in oncology already assist in treatment selection, prognosis estimation, and the synthesis of patient data. In England, for example, an AI tool called “C the Signs” helped boost cancer detection in GP practices from 58.7% to 66.0%. These are encouraging steps. But anything below 100 percent is not enough when a life is at stake.

Cancer patients cannot afford to wait for us to resolve the issues these technologies still have, and we risk something far worse than delay: bad decisions born of incomplete, outdated, or altogether fabricated information. One of the worst problems is “AI hallucination”: cases where an AI presents false information, invented studies, nonexistent anatomical structures, or incorrect treatment protocols. In one shocking example, Google’s health AI misdiagnosed damage to a “basilar ganglia,” an anatomical structure that does not exist.
The confidently presented output looked authoritative until physicians recognized the error. Recent testing of six leading models, including systems from OpenAI and Google’s Gemini, revealed just how unreliable these tools can be in medicine. They produced confident, step-by-step explanations that looked persuasive but were riddled with errors, ranging from incomplete logic to entirely fabricated conclusions. In oncology, where every patient is an outlier, that margin of error is unacceptable. Even specialized medical chatbots, which may sound authoritative, still present opaque and untraceable reasoning: their sources are inconsistent, and their statistics are often meaningless. This is decision distortion.

The legal and ethical implications are real. If a treatment based on AI guidance causes harm, who is liable? The physician? The hospital? The AI developer? Medico-legal frameworks are scrambling to catch up, with some warning that overreliance on AI without human oversight could itself constitute negligence.

The problem of AI hallucination extends beyond medicine. In the legal world, hallucinations have already led to serious consequences: in at least seven recent cases, courts disciplined lawyers for citing fake case law generated by AI. In one high-profile case, Morgan & Morgan attorneys were sanctioned after submitting motions containing bogus citations. If courts are demanding accountability for AI mistakes in law, how long before the medical malpractice lawsuits start being filed?

In oncology especially, reliance on AI amplifies risk because of how the tools are trained. Many large language models and decision systems depend on fixed journal cohorts or curated datasets, and new oncology breakthroughs may remain outside those training collections for months or years. When we query such a system, it may omit the newest trial, ignore emerging biomarkers, or default to an outmoded standard of care.
When AI invents studies or hallucinates efficacy, and doctors rely on it, patients pay the price. Moreover, cutting-edge medical data is often fragmented, heterogeneous, and non-standardized: imaging formats differ, electronic health record notes are not uniform, and rare biomarkers may exist only in supplementary data. AI does best with well-structured, consistent data; it struggles with the disorder at the frontier of research. That means decisions about novel or borderline cases may be precisely where AI is least reliable.

I’m not arguing that we scrap AI in cancer care. On the contrary, we must keep developing these tools, pushing boundaries, and harnessing the power of computation to spot patterns no human can see. But we must not hand over ultimate decision-making authority to them, at least not yet. Cancer patients deserve better than experiments. They deserve human physicians who remain in the loop, who audit, challenge, and interrogate AI outputs.

We need an architecture of human-AI collaboration. When a chatbot suggests a regimen, the oncologist should review the supporting evidence, check for newly published trials, and confirm that the model’s assumptions match the patient’s specifics. The physician must own the decision.

We can establish effective guardrails by regularly validating AI systems against updated clinical data, promoting transparency in training sources, and mandating human review of all AI-suggested decisions. Developing clear liability rules will also help ensure accountability and foster responsible innovation. In practice, that means clinics deploying AI decision tools should monitor AI output, compare outcomes, run audits, and allow physicians to override or correct AI suggestions. We must also push for standardization of data, sharing across institutions, open and timely inclusion of new studies, and rigorous mechanisms to flag contradictions and hallucinations.
Without that, the models will always lag the frontier. Cancer patients cannot wait for us to achieve AI perfection. But they deserve the best possible care now, and that requires that we never abdicate human responsibility in the name of speed. AI must serve as an assistant, not a dictator. Humans remain in charge of deliberation and decision-making, and they must always err on the side of caution when faced with unverified or ambiguous algorithmic output.

AI chatbots are tools, not authorities. When we start letting algorithms decide instead of doctors, we have crossed from medicine into potential malpractice. Cancer patients don’t need perfect chatbots. They don’t have the time for the technology to catch up, and they cannot afford doctors who make decisions based on incomplete or outdated information. For patients and their families, the stakes are too high, and they deserve a much higher standard of care.
Anna Forsythe Built Oncoscope to Give Doctors What They Actually Need: Usable Intelligence

In today’s healthcare landscape, oncologists are drowning in data. Thousands of studies are published each month, new FDA approvals roll out regularly, and guidelines change constantly. Yet, despite this flood of information, many doctors still struggle to make the best treatment decisions for their patients. The problem is not a lack of data; it is a lack of usable insights. Nowhere is this more critical than in oncology, where every day and every decision can change the course of a patient’s life.

Anna Forsythe, founder of Oncoscope-AI, an AI-powered oncology intelligence platform, has spent her career at the intersection of science, economics, and clinical practice. She believes the answer is not to add more data to the pile but to transform it into clarity. “We do not need to give physicians more to read,” Forsythe says. “We need to give them the right information, in real time, that is relevant to the patient in front of them.”

It is a reasonable expectation, but one that the current system sometimes fails to meet. Oncology guidelines can span hundreds of pages and are often outdated by the time they reach clinical use. “While there have been recent attempts to address this problem with a chatbot approach, it is the human/AI combination that is key to achieving usability and physicians’ trust,” Forsythe adds.

Forsythe’s platform, Oncoscope, addresses this challenge by merging human expertise with trained artificial intelligence. It automatically reviews and organizes clinical trials, cross-referencing them with regulatory approvals and treatment guidelines. The result is a curated, reliable, and immediately usable knowledge base that oncologists can access in seconds.

The workflow is simple. A doctor inputs three basic clinical parameters: the stage of the disease, the genetic profile, and any prior treatments.
In return, they receive a tailored, human-reviewed list of relevant studies, including survival outcomes and progression data, with direct links to guidelines, approvals, and original publications. There is no need to scroll through irrelevant abstracts or search multiple databases; everything is in one place, organized and actionable.

This kind of tool is not just a convenience. It is a necessity. Forsythe says she was inspired to create Oncoscope after seeing people in her own life receive outdated or suboptimal cancer treatments. In one case, a friend with late-stage breast cancer was placed on chemotherapy despite the existence of a newer, more targeted therapy. The doctor had not yet seen the recent study supporting it. Forsythe found it in three clicks. “That story is not an exception,” she says. “It is happening every day. And it is not because doctors are careless. It is because the information is not delivered in a way they can use quickly.”

This is a systemic flaw, and it has consequences. When oncologists default to older treatments because they cannot keep up with new evidence, patients can miss out on therapies that could extend or improve their lives. In a field where even a few months of added survival can mean everything, delays in information are delays in care.

Forsythe and her team designed Oncoscope to cut out these delays. The system prioritizes the most rigorous research, flagged for relevance and clinical significance, and each entry is reviewed by experts to ensure it meets regulatory-grade standards. Doctors are not asked to interpret raw data; they are given the insights they need to make decisions now.

Critically, Oncoscope is free for verified healthcare professionals. Forsythe calls it an altruistic venture, at least for now. Her goal is simple: to give doctors a tool they can trust and patients the care they deserve. “If we want better outcomes in cancer care,” she says, “we do not need more information.
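To make the three-parameter lookup concrete, the matching step can be sketched as a simple filter over a curated study index. This is a purely illustrative Python sketch; the `Study` fields, matching rules, and example trials are assumptions for exposition, not Oncoscope’s actual data model or logic.

```python
# Illustrative only: a toy filter in the spirit of the three-parameter
# lookup described above. Field names and trials are invented.
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    stages: set        # disease stages the trial covered
    biomarkers: set    # genetic alterations targeted
    max_prior_lines: int  # max prior treatment lines allowed

def match_studies(studies, stage, biomarkers, prior_lines):
    """Return studies compatible with the three clinical parameters."""
    return [
        s for s in studies
        if stage in s.stages
        and biomarkers & s.biomarkers       # at least one shared biomarker
        and prior_lines <= s.max_prior_lines
    ]

studies = [
    Study("Trial A", {"III", "IV"}, {"HER2+"}, 2),
    Study("Trial B", {"IV"}, {"EGFR", "ALK"}, 1),
]

hits = match_studies(studies, stage="IV", biomarkers={"HER2+"}, prior_lines=1)
# Only "Trial A" matches: it covers stage IV, targets HER2+,
# and allows up to two prior treatment lines.
```

The real system, of course, layers human expert review, guideline cross-referencing, and regulatory data on top of any such retrieval step; the sketch only shows why three structured inputs are enough to narrow thousands of studies to a short, relevant list.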
We need smarter information. That is what changes lives.”

Beka Vinogradov is the Digital Communications Lead for Oncoscope-AI. She holds a Master’s in Health Administration and has extensive experience and education in business, marketing, and design.