Concerns Regarding Artificial Intelligence

Concerns regarding Artificial Intelligence (AI) are mounting, according to recent studies, due in part to a lack of testing that has already allowed frightening errors to surface in AI products. IBM once boasted that its AI could “out-think cancer.” Bots are becoming increasingly smart, but can they really overcome a fatal disease? AI may be an excellent medical sidekick, yet experts say we should be concerned about it taking over doctors’ roles. The FDA claims “the potential of digital health is nothing short of revolutionary,” while many doctors believe the agency has not taken the precautions necessary to justify that conclusion. It should be mentioned that the FDA does prioritize the AI products that pose the greatest risk to patients, and your primary physician’s office and surgeon have likely researched their products thoroughly. Certain products, however, seemingly face no guidelines at all.

Concerns Regarding AI and the FDA

The tech industry is notorious for its “fail fast, fix it later” mentality, but that approach cannot apply to diagnosing patients. Oren Etzioni of the Allen Institute says there is a “financial incentive” to ensure medical safety: “Nobody is going to be happy, including investors, if people die or are severely hurt,” he stated. One reason for concern is whether a technology tested in one hospital will work in a different health system. AI systems also read screenings less accurately for some minority groups than for others. Further, some systems “learn” to make predictions based on environmental factors surrounding a screening rather than on what the screening actually shows. Cardiologist Eric Topol says no AI products sold in the United States have been tested in randomized clinical trials, which are perhaps the strongest source of medical evidence. Instead, the models are mainly tested on computers rather than in the hospitals where they are intended to be used. In this way, patients unwittingly become the test subjects for some of these technologies. In fact, in 2016 Congress exempted certain types of medical software from federal review.

Are We Lowering Standards?

Concerns regarding Artificial Intelligence are growing rapidly. The equipment being used is not causing health issues; rather, it sometimes fails to detect them. One reason may be that “moderate-risk” products are allowed to be marketed if they are similar to products that have worked in the past; some equipment has been cleared simply for showing similarities to a product from 1976. Companies will be responsible for monitoring the safety of some of their own products and reporting back to the FDA. High-risk products, such as software in pacemakers, will still be heavily evaluated and regulated by the FDA, so there is seemingly little need for concern about the products that directly affect our bodies. However, patients want to be sure that anything used in their doctor’s office is as reliable as possible. Perhaps the best way to handle this is for doctors to step in and for entrepreneurs to take a step back. Artificial Intelligence has proven beneficial in small doses, but some believe it’s only a matter of time before the risks outweigh the benefits.

How do you feel about this subject? Let us know below and then call us for a full insurance coverage review. We love helping people!

Need Help?

Empower Brokerage can also help with your Medicare questions. Get an instant quote or call and speak with a licensed agent about finding the right Medicare Advantage plan in your area. 1-888-446-9157
