One day in May 1997, a furious man stared daggers across a chessboard at a large metal box. He looked down, desperately searching for an option that led away from the conclusion he saw on the board. His search came up empty. The man resigned and stormed away from the innocuous metallic box that had stolen the victory he had so assuredly expected.

The man’s name was Garry Kasparov, and he would go down in history as the Grandmaster who was soundly defeated at his passion by a supercomputer.

For some doctors today, Kasparov’s losing struggle against his AI opponent hits a little too close to home in light of the recent innovations in medical technology. Diagnostic tools such as IBM Watson and Isabel are making considerable waves in the medical community for their abilities to process and analyze incredible quantities of information. Some prominent members of the tech community even question whether the technology will one day render human doctors unnecessary.

Needless to say, that question comes uncomfortably close to pointing out the current crisis of confidence faced by doctors and patients alike.

As cloud-integrated software, Watson can sift through databases of medical papers, patient files, and even textbooks for relevant information – a search through millions of pages, processed in a matter of seconds. The software even delivers suggestions for a differential diagnosis, with conditions listed in order of probability along with their recommended treatments.

In the end, though, human doctors are charged with providing care to the patient. Watson is a tool – an invaluable one – but as I’ve written before, the software’s ability to crunch numbers and assess probabilities cannot replace a doctor’s years of experience, perception, and thoughtful decision-making. However, a doctor who utilizes Watson as a resource has the potential for tremendous success as a diagnostician.

Oddly, that potential doesn’t seem to have taken root in practice. Dr. Edward Hoffer, a scientist who helped create Watson’s precursor, DXplain, cited reluctant doctors as the primary obstacle to bringing supplementary software into common practice. Their rejection was so complete that hospitals abandoned Hoffer’s technology despite seeing success during trials. Hoffer further suggested that the unenthusiastic doctors felt the technology was unnecessary, and that they believed they could perform just as well without the aid.

It is worth noting that the narrative surrounding aids such as Watson has been hostile to doctors. When overly enthusiastic members of the tech community describe Watson as the first step towards AI doctors, human physicians shut down. Instead of becoming an invaluable tool in a human-tech partnership, Watson becomes recontextualized as a threat to a doctor’s livelihood.

The narrative needs to be changed. Supplementary technologies such as Watson have been shown to lower misdiagnosis rates, and would undoubtedly make a positive change if widely accepted. If doctors see Watson as a tool, rather than as a threat to their authority or experience, they will likely be more open to using it; thus, proponents of the software and hospital administrators alike must work together to properly contextualize the technology.

On the other side of the issue, patients may have too little confidence in their physician’s skills, and too much in technology. A study presented at the 2017 Pediatric Academic Societies Meeting found that when given online information that seemed to contradict a doctor’s diagnosis of their child’s ailment, parents were more likely to doubt the doctor and ask for a second opinion. Additionally, a study published in the Journal of Medical Internet Research in 2015 found that patients are more likely to “prefer new technologies for a diagnosis” than their doctors.

In our increasingly Internet-savvy culture, more and more patients turn to WebMD and Google for a diagnosis before ever seeing a doctor – and patients who arrive with a self-diagnosis in hand are more likely to distrust or challenge their doctor’s findings.

This is troubling, because trust is everything for a doctor. Patients are more likely to follow care directions from physicians they trust, and self-diagnosis apps undermine a doctor’s authority. By extension, they inadvertently harm long-term care plans by sabotaging a patient’s belief in their doctor’s ability.

Doctors and patients alike face a crisis of confidence about the human capability for healing others. Hesitant to trust overhyped technology, doctors turn away needed supplementary resources such as Watson while patients trust faulty apps over their trained physicians. Neither side stands on steady ground.

Progress needs to be made in the medical community, and the way forward will require a willing and thoughtful partnership between doctors, patients, and technology. Working in tandem, human doctors and innovative technology have the potential to provide a higher quality of care to patients – and to bring the two together, the narrative surrounding medical technology has to change. So long as supplementary tools such as Watson are viewed as replacements for, rather than aids to, doctors, the medical community will be slow to adopt them. Similarly, real-world work has to be done to regain patient trust and to educate patients on the dangers of relying on incomplete online diagnoses.

Neither patients nor their doctors are wrong in their respective trust and distrust of technology – but a balance must be found. True progress comes when human experience and technological innovation come together in the pursuit of better medical care.