Every day, doctors make some of the most important decisions of their patients' lives. The pressure is extreme, the stakes couldn't be higher, and the time frame is often compressed into a few seconds: a few seconds to make a choice that could mean the difference between life and death.

Doctors are highly trained, of course, and a good doctor draws on vast expertise and years of experience. Still, anyone who has ever faced a difficult medical decision and sought a second opinion knows that even doctors vary in how they interpret the same information and in the recommendations they make, recommendations that can carry enormous consequences for the patient.

Often, entirely different maladies present with the same symptoms, and whether a doctor arrives at the correct diagnosis makes the difference between the right treatment and the wrong one. Diagnostic mistakes are responsible for 15 percent of the errors that cause harm to patients. It makes perfect sense, then, to search for some way to standardize medical decision making, to eliminate human error, and to make diagnosis objective rather than subjective. Programmers are racing toward the perfect diagnostic tool: a computer that can turn diagnosis from a delicate art into an errorless algorithm. It's a goal well worth our time, but it comes with some very real risks.

As early as the 1970s, a team at the University of Pittsburgh was trying to solve the problem with computing power. They eventually developed Quick Medical Reference, a commercial diagnostic tool that has since been discontinued. For the past thirty-some years, Massachusetts General Hospital has been working on DXplain, a program that produces a ranked list of potential diagnoses based on a patient's symptoms and lab results.

More recently, advances in programming and artificial intelligence have opened the door to highly sophisticated tools designed to help doctors reach the right diagnosis. One of the most successful currently available is Isabel, a program named for the developers' daughter, who was almost fatally misdiagnosed at the age of three. Isabel had come in with chickenpox, but her physicians failed to spot the far more dangerous necrotizing fasciitis underlying her symptoms.

Her software counterpart is programmed to avoid the kind of human bias shown by the doctors who could not see beyond the obvious and easily treated chickenpox. Isabel objectively weighs even small clues that a doctor eager to focus on the most dramatic symptoms might easily disregard. Nor does the program set aside a rare diagnosis in favor of a common one simply because the common one comes to mind more easily, as humans are prone to do. It has been on the market for several years, and research suggests it is correct 74 to 96 percent of the time, depending on how much time a doctor devotes to entering a patient's information.
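To make that idea concrete, here is a toy sketch in Python of how a decision-support tool might rank candidate diagnoses: every reported finding adds evidence to every condition that can produce it, so a rare but dangerous diagnosis stays near the top of the list as long as the clues support it, rather than being dropped because a common explanation comes to mind first. The conditions, findings, and weights are invented for illustration; this is not Isabel's actual algorithm.

```python
# Toy illustration only: a minimal evidence-weighted ranker for candidate
# diagnoses. The conditions, findings, and weights are made up for the
# example; real tools like Isabel or DXplain are far more sophisticated.

# How strongly each finding points toward each candidate condition.
EVIDENCE_WEIGHTS = {
    "chickenpox": {"fever": 1.0, "itchy rash": 3.0, "blisters": 2.0},
    "necrotizing fasciitis": {
        "fever": 1.0,
        "blisters": 1.0,
        "severe localized pain": 4.0,    # a small clue that is easy to dismiss
        "rapidly spreading redness": 4.0,
    },
    "common cold": {"fever": 0.5, "cough": 2.0, "runny nose": 2.0},
}

def rank_diagnoses(findings):
    """Score each condition by how well the reported findings fit it.

    The score depends only on the evidence, not on how common or
    'memorable' a condition is -- the opposite of the availability
    bias a hurried human might fall into.
    """
    scores = {}
    for condition, weights in EVIDENCE_WEIGHTS.items():
        scores[condition] = sum(weights.get(f, 0.0) for f in findings)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

if __name__ == "__main__":
    # The 'small clues' (severe pain, spreading redness) keep the rare but
    # dangerous diagnosis above the obvious one.
    reported = ["fever", "itchy rash", "blisters",
                "severe localized pain", "rapidly spreading redness"]
    for condition, score in rank_diagnoses(reported):
        print(f"{condition}: {score:.1f}")
```

Run on the findings above, the sketch ranks necrotizing fasciitis ahead of chickenpox, simply because the evidence fits it better, which is the behavior the paragraph describes.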

A great deal of excitement has also sprung up around Watson, IBM's AI superstar, which won its fame defeating human Jeopardy champions. Watson is currently absorbing massive amounts of data from medical textbooks and research, and generating considerable hype that it will be "smarter" than human diagnosticians.

So can a computer really outperform a human doctor? It's a complex question. Certainly a computer can retain more information, with more precision, than any human mind. Indeed, a human doctor would need to spend an impossible 160 hours a week just reading to keep up with new medical science as it is published, to say nothing of thinking critically about that information, making connections, and applying it thoughtfully. And even highly trained humans are undoubtedly swayed by heuristics and cognitive traps more often than we might hope.

And yet, being a skillful doctor is about much more than crunching the right data. The risk of all this technology is that it can discourage doctors from analyzing patients themselves and from cultivating the intuition that is crucial to their craft. Humans may not have the memory capacity of a machine, but we are experts at recognizing patterns. Sometimes a human eye is just what a patient needs. There may also be critical moments when consulting a computer program would require seconds a patient simply doesn't have. In those moments, you had better hope your doctor's thoughtful decision-making skills aren't rusty.

The intangible intuition of a highly experienced and skilled doctor may also be necessary for a program like Isabel or Watson to function at all. Even the smartest computer can only analyze the information it is given, which means it relies, first and foremost, on a doctor's critical thinking to ask the right questions.

Technology provides us with powerful tools, but their success ultimately depends on our ability to remember that they are just that: tools to support human intelligence, not replace it. We can't afford to ignore the benefits AI can offer, but neither can we afford doctors who focus on plugging in data rather than carefully and critically thinking through what a patient presents, drawing on the magic combination of knowledge, training, experience, and intuition that we call mindfulness.