More Reasons to NOT Trust Artificial Intelligence (AI) or Activity Trackers in Health Care

By B.N. Frank

Artificial Intelligence (AI) technology operates on algorithms that can be lethally inaccurate and racially biased. Failures like these are why an “Artificial Intelligence Hall of Shame” now exists, and such problems seem likely to continue as long as health care providers rely heavily on AI to diagnose and treat patients.
From Wired:
The Alarming Blind Spots in Health Care AI
Artificial intelligence promises to make medicine smarter. But what happens when these software systems don’t work as advertised?
Artificial intelligence is everywhere. And increasingly, it’s becoming a critical part of health care. Doctors use it to try to suss out symptoms of deadly infections like sepsis; companies like Google are developing apps to help you identify ailments just by uploading some pics.
But AI is only as good as the data sets fed into these systems. And when the data sets are flawed, or the results are not properly interpreted, the software can miss symptoms entirely or produce false positives, flagging illness that isn’t there. In some cases, these errors can exacerbate already stark racial disparities in the health care system.
This week on Gadget Lab, WIRED senior writer Tom Simonite joins us to talk about the blind spots in medical AI and what happens when tech companies put these algorithms into their users’ hands.
Read Tom’s story about the flaws in the AI that predicts sepsis here. Read his story about Google’s new dermatology app. Read more about the racial bias in AI systems (and how those algorithms might be fixed). Also check out Lauren’s story about how the internet doesn’t let you forget.
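To make the excerpt’s point about flawed data sets concrete, here is a minimal, hypothetical simulation, not taken from the Wired piece: a single diagnostic “risk score” threshold is tuned on training data dominated by one patient group, then applied to everyone. All group names, score distributions, and numbers below are invented for illustration.

```python
import random

random.seed(0)

# Toy setup: sick patients score higher than healthy ones, but group B's
# sick patients present with lower scores on average than group A's.
def sample(group, sick, n):
    mu = {('A', True): 8.0, ('A', False): 4.0,
          ('B', True): 6.5, ('B', False): 4.0}[(group, sick)]
    return [random.gauss(mu, 1.0) for _ in range(n)]

# The "flawed data set": training data is 95% group A.
train_sick = sample('A', True, 950) + sample('B', True, 50)
train_well = sample('A', False, 950) + sample('B', False, 50)

# Pick the threshold that minimizes total errors on the training data.
def errors(t):
    fn = sum(s < t for s in train_sick)   # sick patients flagged healthy
    fp = sum(w >= t for w in train_well)  # healthy patients flagged sick
    return fn + fp

threshold = min((t / 10 for t in range(20, 100)), key=errors)

# Evaluate the miss rate (false negatives) separately per group.
for group in ('A', 'B'):
    sick = sample(group, True, 10_000)
    miss = sum(s < threshold for s in sick) / len(sick)
    print(f"group {group}: {miss:.1%} of sick patients missed")
```

Run as written, the threshold that looks nearly optimal on the majority-group training data misses only a few percent of group A’s sick patients but a much larger share of group B’s: the model is “accurate” on average while failing one group badly, which is the disparity pattern the excerpt describes.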
In regard to health-monitoring wearables: last year IEEE recommended that people avoid wearing smart watches and wireless earphones unless absolutely necessary, citing harmful radiation exposure. Over the years, wearers have reported burns, rashes, shocks, and other undesirable symptoms from these devices (see 1, 2, 3), and some complaints have led to recalls. More recently, the FDA warned that smart watches as well as smartphones can affect medical implants.
While data collected by the Apple Watch may currently be considered accurate, in 2016 a class action lawsuit was filed against Fitbit over inaccurate heart-rate readings.
Activist Post reports regularly about AI and other unsafe technology. For more information, visit our archives and the following websites:
- Electromagnetic Radiation Safety
- Environmental Health Trust
- Physicians for Safe Technology
- Wireless Information Network