Wednesday, 3 March 2021

Why we aren’t all taking our orders from Skynet right now

Our growing dependence on machine learning AI comes with its own set of risks: Machine learning software is constantly touted as the technology of the future that will allow cars to drive themselves, help us make smarter investments, and deliver more efficient and accurate healthcare services.

But the introduction of these programs is sometimes beset with problems: Self-driving cars get into accidents, AI-driven investments turn losses, and incorrect diagnoses are handed down, underlining the pitfalls of letting AI independently make increasingly complex decisions.

What makes machine learning go wrong? The likelihood of errors depends on a lot of factors, including the amount and quality of the data used to train the algorithms and the specific type of machine-learning method chosen, according to the Harvard Business Review. It also depends on the environment in which the system was built and operates. For example, if a machine-learning algorithm for stock trading has been trained only on data from a period of low market volatility and high economic growth, it may perform poorly when the economy enters a recession, because the training data no longer reflects current conditions.
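The stock-trading example above boils down to a data-distribution shift. As a minimal sketch (with made-up numbers, not real market data), imagine a "model" that simply learned the average daily return from a calm bull market, then gets deployed in a volatile recession:

```python
import random

random.seed(0)

# Hypothetical training data: low volatility, mild growth.
train = [random.gauss(0.05, 0.2) for _ in range(1000)]
learned_mean = sum(train) / len(train)  # the "model" memorizes the average

# Deployment data: a recession, where the distribution has shifted.
recession = [random.gauss(-0.10, 0.8) for _ in range(1000)]

def mse(prediction, data):
    # Mean squared error of a constant prediction against observed returns.
    return sum((x - prediction) ** 2 for x in data) / len(data)

print(mse(learned_mean, train))      # small error on data like the training set
print(mse(learned_mean, recession))  # much larger error after the shift
```

Nothing about the model changed between the two evaluations; only the environment did, which is exactly why a system trained in one regime can quietly fail in another.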

Not to mention the ethical dilemmas: When given the freedom to make autonomous decisions, machine-learning AI can end up in situations where its algorithms fail to produce the “ethical decision.” This could mean discriminating against certain genders or races when approving a loan, or a self-driving car having to decide whether to endanger a vehicle in the next lane or a pedestrian on the street.

Does that mean we should reel it in? One possible way to control how far an AI program's learning goes is to release only tested and locked versions at intervals rather than letting it evolve continuously (that, or risk another Hitler Twitter bot). The US FDA decided to take that route and has typically approved only medical software with locked algorithms. However, evidence now shows that these reeled-in programs are just as risky: Locked-algorithm AI doesn't necessarily lower the incidence of inaccurate decisions, environment-bound data, or moral risks.

So what is the solution? We're still pooling ideas: AI researchers at Apple, Amazon, Google, Facebook, IBM, and Microsoft created the Partnership on AI (PAI) in 2016 to develop best practices on AI, improve public understanding of the technology, and reduce its potential harms, reports Tech Talk. Among its ventures, PAI created the AI Incident Database, a repository of documented real-world failures of AI systems. The database aims to help relevant parties learn from past incidents: Engineers can use it to identify the harms their AI systems could cause, while managers and risk officers can determine the right requirements for a program they are developing.

Enterprise is a daily publication of Enterprise Ventures LLC, an Egyptian limited liability company (commercial register 83594), and a subsidiary of Inktank Communications. Summaries are intended for guidance only and are provided on an as-is basis; kindly refer to the source article in its original language prior to undertaking any action. Neither Enterprise Ventures nor its staff assume any responsibility or liability for the accuracy of the information contained in this publication, whether in the form of summaries or analysis. © 2022 Enterprise Ventures LLC.

Enterprise is available without charge thanks to the generous support of EFG Hermes (tax ID: 200-178-385), the leading financial services corporation in frontier emerging markets; SODIC (tax ID: 212-168-002), a leading Egyptian real estate developer; SomaBay (tax ID: 204-903-300), our Red Sea holiday partner; Infinity (tax ID: 474-939-359), the ultimate way to power cities, industries, and homes directly from nature right here in Egypt; CIRA (tax ID: 200-069-608), the leading providers of K-12 and higher level education in Egypt; Orascom Construction (tax ID: 229-988-806), the leading construction and engineering company building infrastructure in Egypt and abroad; Moharram & Partners (tax ID: 616-112-459), the leading public policy and government affairs partner; Palm Hills Developments (tax ID: 432-737-014), a leading developer of commercial and residential properties; Etisalat Misr (tax ID: 235-071-579), the leading telecoms provider in Egypt; and Industrial Development Group (IDG) (tax ID:266-965-253), the leading builder of industrial parks in Egypt.