Why we aren’t all taking our orders from Skynet right now
Our growing dependence on machine-learning AI comes with its own set of risks. Machine-learning software is constantly touted as the technology of the future that will allow cars to drive themselves, help us make smarter investments, and deliver more efficient and accurate healthcare services.
But the introduction of these programs is sometimes beset with problems: self-driving cars get into accidents, AI-driven investments incur losses, and incorrect diagnoses are handed out, underlining the pitfalls of letting AI independently make increasingly complex decisions.
What makes machine learning go wrong? The likelihood of errors depends on many factors, including the amount and quality of the data used to train the algorithms, the specific machine-learning method chosen, and the design of the overall system, according to the Harvard Business Review. It also depends on the environment in which the machine learning was created and operates. For example, if a stock-trading algorithm has been trained only on data from a period of low market volatility and high economic growth, it may perform poorly when the economy enters a recession, because its training data no longer reflects current conditions.
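The trading example above is a case of what practitioners call distribution shift, and it can be illustrated with a minimal sketch. The "model" here is deliberately trivial (it just predicts the average daily return it saw during training), and all the numbers are made up for illustration:

```python
import random

random.seed(0)

# Toy "model": predict tomorrow's return as the average return seen in training.
def train(returns):
    return sum(returns) / len(returns)

def mean_abs_error(prediction, returns):
    return sum(abs(r - prediction) for r in returns) / len(returns)

# Hypothetical training data from a calm, growing market:
# small positive drift, low volatility.
calm = [random.gauss(0.001, 0.005) for _ in range(1000)]

# A recession regime the model never saw: negative drift, high volatility.
recession = [random.gauss(-0.004, 0.02) for _ in range(1000)]

model = train(calm)
print(mean_abs_error(model, calm))       # small error on familiar conditions
print(mean_abs_error(model, recession))  # much larger error after the shift
```

Nothing about the model changed between the two measurements; only the environment did, which is exactly why an algorithm that looked reliable in testing can fail once market conditions move outside its training data.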
Not to mention the ethical dilemmas: when given the freedom to make autonomous decisions, machine-learning AI can be put in situations where, following its own algorithms, it does not make the "ethical" choice. Examples include discriminating against certain genders or races when approving a loan, or a self-driving car having to decide between endangering a vehicle in the next lane and a pedestrian on the street.
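The loan example deserves a closer look, because the bias usually isn't programmed in deliberately; it is learned from biased historical decisions. The sketch below uses entirely made-up data and a deliberately simple stand-in for a real classifier, just to show the mechanism:

```python
# Hypothetical historical loan data: (income, group, approved).
# Past human decisions approved group "A" more often at the same income,
# so anything fit to this history inherits that bias.
history = [
    (income, group, income > (40 if group == "A" else 60))
    for group in ("A", "B")
    for income in range(20, 101, 5)
]

# "Train" by memorizing, per group, the lowest income ever approved --
# a toy stand-in for a real model picking up group as a predictive feature.
threshold = {}
for income, group, approved in history:
    if approved:
        threshold[group] = min(threshold.get(group, float("inf")), income)

def decide(income, group):
    return income >= threshold[group]

# Same income, different group, different outcome:
print(decide(50, "A"), decide(50, "B"))  # True False
```

The model is faithfully reproducing its training data; the data itself carried the discrimination, which is why "the algorithm decided" is no defense when the historical record it learned from was unfair.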
Does that mean we should reel it in? One possible way to control how far an AI program's learning goes is to release only tested, locked versions at intervals instead of letting it constantly evolve (that, or risk another Hitler Twitter bot). The US FDA decided to take that route and has typically approved only medical software with locked algorithms. However, evidence now shows that this reined-in software is just as risky: locked-algorithm AI doesn't necessarily lower the occurrence of inaccurate decisions, environment-bound data, or moral risks.
So what is the solution? We're still pooling ideas. A group of AI researchers at Apple, Amazon, Google, Facebook, IBM, and Microsoft created the Partnership on AI (PAI) in 2016 to develop best practices for AI, improve public understanding of the technology, and reduce its potential harms, reports Tech Talk. Among its ventures, PAI created the AI Incident Database, a repository of documented real-world failures of AI systems. The database aims to help relevant parties learn from past incidents: engineers can use it to find out what harms their AI systems could cause, while managers and risk officers can determine the right requirements for a program they are developing.