Machine Learning: The Jeepers Creepers?

Tapish Panwar
Jun 25, 2023 · 4 min read

The wonders of Machine Learning do not come without a price. In this article, I discuss some of the key risks and challenges associated with Machine Learning, as presented by Ethem Alpaydin in his MIT Press Essential Knowledge Series book on Machine Learning.

Ethem Alpaydin,
Professor, Computer Science, Özyeğin University
Member, The Science Academy, Turkey

The MIT Press Essential Knowledge Series offers accessible and concise pocket-sized books on topics of current interest. These are quick reference guides to some of the most pressing subjects of the day. I picked up 'Machine Learning' by Ethem Alpaydin and was impressed with the easy narration that helps us take small bites of some of the key concepts of Machine Learning.

I wanted to summarize a few scenarios related to the risks associated with Machine Learning (read: AI, which uses ML and beyond), and how to manage them, as presented in the book.

Content of 'Machine Learning' by Ethem Alpaydin

But before that, here is an excerpt from the book, as it is, on why we may not, after all, need regulations built from scratch to keep AI in check.

"....It’s a crime to kill a person whether one uses a rock or a drone. The high court of England and Wales stated in 2019, that the fact that technology is new does not mean that it is outside the scope of existing regulation, or that it is always necessary to create a bespoke legal framework for it."

And now, the risks:

  1. Data Privacy and Security: The basic requirement in data privacy is that the user who generates the data should always know what and how much data is collected, what part of this data gets stored, whether the data will be analysed for any purpose, and if so, what that purpose is. Basically, individuals should have complete control over their data and how it is used. They should also be able to request corrections to the collected data, as well as its complete removal, whenever they want.
  2. Bias: A good learning model depends on how good the training data is. An existing bias in the training data will lead to bias in the learning model. For example, if a face recognition training data set contains more white faces than those of other races, it will perform poorly on races that are under-represented. Thus, all efforts must be made to ensure that the training data is evenly representative and unbiased.
  3. Model Interpretability: Validation and verification of software systems making decisions for us should be a built-in, continuous, and rigorous process. This is to ensure that the system does what it should be doing every single time. Also, we want our learning models to provide not only an output decision, but also an explanation as to how the decision was made.
  4. Ethical, Legal and Other Social Aspects: Artificial intelligence systems must guarantee transparency, fairness, interpretability, accountability and, most importantly, safety. Further, machines must also be trained to make decisions at run time in catastrophic situations. While such questions may be difficult to answer, nevertheless they have to be answered. For example, what should an autonomous car do when a child suddenly runs across the road? Suppose the car is going too fast to stop, but it can still steer to the right to avoid hitting the child. But what if the child’s mother is standing to the right of the road? How should the car decide whether to go ahead and hit the child or steer right and hit the mother? What if the car could steer to the left and jump off the cliff, thus saving both the child and the mother, but possibly killing you? Can driving software be allowed to take factors like age or gender into account in making a decision? These may look like tough calls to make, but an autonomous car system must be fed with the data to make a decision at such a time.
  5. Mass Surveillance and Misuse: Pattern recognition technology such as face and speech recognition can be used for mass surveillance by both good and bad actors, often without the consent of those involved. Robots are also getting better at killing with more data (I just saw a news report of an AI-based robot dog with a flame thrower installed on it! Yes, it is happening).
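The bias point above can be illustrated with a tiny, hypothetical sketch (not from the book): two groups whose data looks different, with one group heavily over-represented in the training set. A classifier tuned for overall accuracy ends up near-perfect on the majority group and noticeably worse on the under-represented one.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1-D "feature" whose relationship to the label differs by group.
# Group A dominates the training set 9:1.
n_a, n_b = 900, 100
x_a = rng.normal(0.0, 1.0, n_a)
y_a = (x_a > 0.0).astype(int)   # for group A, label flips at 0.0
x_b = rng.normal(1.0, 1.0, n_b)
y_b = (x_b > 1.0).astype(int)   # for group B, label flips at 1.0

x = np.concatenate([x_a, x_b])
y = np.concatenate([y_a, y_b])

# "Training": pick the single threshold that maximizes overall accuracy
# on the pooled (imbalanced) data.
thresholds = np.linspace(-3, 4, 701)
accs = [((x > t).astype(int) == y).mean() for t in thresholds]
best_t = thresholds[int(np.argmax(accs))]

# Per-group accuracy of the learned threshold.
acc_a = ((x_a > best_t).astype(int) == y_a).mean()
acc_b = ((x_b > best_t).astype(int) == y_b).mean()
print(f"threshold={best_t:.2f}  accuracy A={acc_a:.2f}  accuracy B={acc_b:.2f}")
```

The learned threshold settles near group A's decision boundary because A supplies nine out of every ten training examples, so the model quietly trades away accuracy on group B. Re-balancing the training set, or evaluating accuracy per group rather than overall, surfaces exactly this kind of hidden bias.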

None of this is doomsday-calling; these are potent risks, and their mitigation must keep pace with the evolution of machine learning. Let the future be much more exciting, and much less scary.

Machine Learning by Ethem Alpaydin
