Wikipedia defines Artificial Intelligence (AI) as “… perceiving, synthesizing, and inferring information—demonstrated by machines, as opposed to intelligence displayed by humans … [including] speech recognition, computer vision, translation between languages, as well as other mappings of inputs.”
Currently, AI is portrayed as dangerous, useful, or just silly. Some reports and media posts suggest that AI will soon outsmart humans and declare war on us. That might be possible, but how likely is it to happen outside of science fiction novels?
Instead of fear or excess hope (two extremes), what about looking at AI as just another arrow in our quiver? A tool? Machines generally excel at tasks by perfecting human skills and multiplying them. AI could help us finish in a day tasks that would commonly take a team of humans months or years. Evaluating research papers is an area where AI shines today, and one where it can still improve.
In a 2023 article in the New England Journal of Medicine*, “Hamdy and colleagues found that prostate cancer-specific mortality was low in men with localized prostate cancer regardless of the treatment assigned.” The AI evaluated and combined data from several published studies of more than 1,600 men, ages 50 to 69, followed for a median of 15 years. Just locating those kinds of studies would be a major task, to say nothing of gathering the disparate data into meaningful observations and probable conclusions. The AI was given the study parameters, and it was up to the task.
Patients were divided into three groups: active monitoring, prostatectomy, or radiotherapy. “The number of deaths from prostate cancer or any cause were similar between the three treatment groups.” While it might be possible that the choice of treatment is irrelevant, do the results actually demonstrate that there are no differences among the three groups?
This is an example of how AI can be set to a research task, not a definitive evaluation of prostate cancer treatments. The AI might be accurate, and humans could well draw useful information from it if the project actually disclosed some clinical facts. But a misleading result could just as easily be explained as an error in how the research was designed and fed to the AI “machine.”
There are numerous variables that were not considered for each patient. Any comorbidity could easily skew the outcomes. A seriously ill person, for example, is likely to die sooner rather than later whether or not he also has prostate cancer. Poor diet, lack of exercise, addictions, and risky life choices can all negatively affect a person’s health and mortality.
Health choices are always personal, and a machine’s evaluation should never be depended upon to make them. Regardless of the number of variables included in an AI study, the machine will never scratch the surface of the almost infinite number of factors that come into play in every decision a human makes, including factors we aren’t even aware of: those placed in us long ago by situations that colored our total concept of self. AI, even when examined as its current proponents prefer, will be subject to “hidden” factors that can affect its decision-making, much the way past experiences and emotions shape a human’s decisions today.
A critical lesson to draw from the discussion above is to stay alert (awake) to the potential for harm whenever a person shifts trust to some place other than self, whether it’s another person, a committee, an insurance policy, a government, or an AI algorithm. A key to understanding AI and its potential is that it’s not human; it is a make-believe version of a human. Every real human is more than the sum of their parts. If that ever changes, it will truly be too late.
No, AI will not take control of anything important, especially our health.
Larry Frieders is a pharmacist in Aurora and the author of The Undruggist: Book One, A Tale of Modern Apothecary and Wellness. He can be reached at thecompounder.com/ask-larry or www.facebook.com/thecompounder.