More healthtech leads to ethical quandaries. AI in healthcare holds the promise of everything from improved diagnosis and treatment to tools that transcribe medical records and assist in surgery. It might even predict future public health threats. But AI innovations could also push doctors toward dangerous ethical compromises. In particular, AI and emerging tech risk violating two fundamental principles of medical ethics: patient privacy and confidentiality, and equal access to healthcare.
Exhibit #1 — AI can be dangerously biased: Bias is a real risk in AI because machines learn from historical data, and history isn’t exactly impartial. Amazon’s 2015 recruiting experiment is a case in point: the algorithm gave higher ratings to male candidates because men had been selected most often in the past. A well-researched blog post from Quantib applies the same principle to healthcare: an algorithm built to detect skin cancer and used on patients from different racial backgrounds needs to be trained on a dataset representative of all skin colors. Otherwise, deploying it in a hospital that serves a diverse patient population is out of the question.
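To make that concrete, here is a minimal Python sketch of the kind of subgroup audit the Quantib post implies. It is an illustration under assumed names (the groups, labels, and toy data are ours, not from any real model): it measures how often a hypothetical skin-cancer classifier catches actual malignancies within each skin-tone group, the gap a representative training set is meant to prevent.

```python
# Hypothetical sketch: auditing a skin-lesion classifier for subgroup bias.
# Group names and toy data are illustrative assumptions, not real results.
from collections import defaultdict

def sensitivity_by_group(records):
    """records: iterable of (group, true_label, predicted_label) tuples,
    where label 1 = malignant. Returns per-group true-positive rate."""
    tp = defaultdict(int)   # malignancies correctly flagged, per group
    pos = defaultdict(int)  # actual malignancies, per group
    for group, truth, pred in records:
        if truth == 1:
            pos[group] += 1
            if pred == 1:
                tp[group] += 1
    return {g: tp[g] / pos[g] for g in pos if pos[g] > 0}

# Toy audit: the model misses far more malignant lesions on darker skin.
audit = sensitivity_by_group([
    ("light", 1, 1), ("light", 1, 1), ("light", 1, 0),
    ("dark", 1, 0), ("dark", 1, 0), ("dark", 1, 1),
])
print(audit)  # -> light ~0.67, dark ~0.33
```

A gap like that between groups is exactly the signal that a model trained on an unrepresentative dataset shouldn’t be deployed as-is.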
Exhibit #2 — AI puts medical data at risk: Medical records are usually held by doctors and protected by strict confidentiality rules. But when AI is involved, it can be harder for patients to understand how and why their data will be used. As healthcare becomes more deeply embedded in complex systems, getting patient consent for every procedure or step in data analysis will only get more difficult, says this Forbes Insights article. A potential workaround is to remove personal identifiers from datasets, but according to a University of California study, today’s anonymization technology isn’t yet up to par.
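Why isn’t stripping identifiers enough? A minimal sketch, with assumed field names, shows the naive approach and its limit: even after the direct identifiers are gone, the remaining quasi-identifiers (age, ZIP code, a rare diagnosis) can still single a patient out, which is the gap the University of California study points to.

```python
# Minimal de-identification sketch (an illustration, not a compliant pipeline).
# Field names are assumptions for this example.
DIRECT_IDENTIFIERS = {"name", "ssn", "phone", "email", "address", "mrn"}

def strip_direct_identifiers(record: dict) -> dict:
    """Return a copy of a patient record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

record = {"name": "Jane Doe", "mrn": "12345", "age": 87,
          "zip": "94110", "diagnosis": "melanoma"}
print(strip_direct_identifiers(record))
# {'age': 87, 'zip': '94110', 'diagnosis': 'melanoma'}
# Still potentially re-identifiable: an 87-year-old melanoma patient in one
# ZIP code may be unique, which is why "anonymized" data often isn't.
```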
The answer might be to reframe traditional notions of confidentiality, recognizing that algorithm developers have legitimate reasons to access sensitive patient information, experts quoted by Forbes say. That would also mean better educating patients about how sharing medical data can support individual and public health, so they are more comfortable with their medical information being viewed or used, AI advocates argue.
But would you really trust a robo-doctor? Digital healthcare increasingly casts doctors as “data clerks,” spending more time on databases and screens and less time interacting with patients, TechTalks’ Ben Dickson writes. The key question: as AI advances, will it give doctors back some of that time by automating data entry, or go the other way and essentially replace them with algorithms?
The case for a digital code of ethics: It’s fair to argue that high-tech healthcare creates the need for a so-called “digital code of ethics.” There are several reasons, including that patients aren’t mere datasets and that the business of medicine ought not triumph over ethical medical care, health academic Eric Swirsky writes for US-based security magazine CSO. “AI has its positives, but it can be misused. So, having an ethical framework allows the proper use of medical databases,” University of New South Wales research ethics director Ted Rohr tells Healthcare IT News.