There is no doubt that Artificial Intelligence has completely reshaped the way we live. It has become an important part of our daily lives, from image and facial recognition apps and conversational assistants like Alexa, Google Assistant, or Siri, to autonomous machines and hyper-personalized systems across the production chain of every industry.
However, let's be honest: most of the people we know do not really understand how AI works or how it processes information. As AI becomes a bigger part of our lives, it is important to know how these systems work, so we can make them respond as expected and demand transparent explanations and reasons for the decisions they make.
There is a whole branch of AI that studies exactly this, how AI systems make decisions, and it is called Explainable AI (XAI). XAI relies on machine learning algorithms, such as decision trees and rule-based classifiers, that offer a certain degree of traceability and transparency. The US Defense Advanced Research Projects Agency (DARPA) describes AI explainability in three parts: prediction accuracy, meaning models that can explain how conclusions are reached to improve future decision making; decision understanding and trust from human users and operators; and inspection and traceability of the actions undertaken by AI systems.
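To make the idea of traceability concrete, here is a minimal sketch of how an interpretable model like a decision tree can expose its reasoning as human-readable rules. The feature names, data, and thresholds below are invented purely for illustration:

```python
# A minimal sketch of the traceability that interpretable models such as
# decision trees provide. All features and data are made up for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy training data: [income_in_thousands, years_of_credit_history]
X = [[20, 1], [35, 4], [50, 10], [80, 15], [25, 2], [60, 8]]
y = [0, 0, 1, 1, 0, 1]  # 0 = deny credit, 1 = approve credit

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# Unlike a black-box model, every decision can be traced back to
# explicit, human-readable rules:
print(export_text(model, feature_names=["income", "credit_history_years"]))
```

Printing the rules shows exactly which thresholds led to each approval or denial, which is the kind of inspection DARPA's framing calls for.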
Of course, not all industries and AI systems need the same degree of traceability and transparency. But as AI grows in fields such as healthcare or the automotive industry with autonomous cars, the way these systems explain their decisions will directly impact our lives, and XAI will become even more important.
The financial industry is a territory where AI is crucial, and we need to understand how it makes decisions and how those decisions affect our lives. With machine learning and AI deciding whether credit cards are approved or denied, it is important to ensure this happens in a safe and prudent manner. If there is data out there about you, like social media profiles, your computer's IP address, where you buy clothes, or how you behave when shopping online, there is probably a way to integrate it into a credit model.
Nowadays, lenders can judge how trustworthy you are by analyzing simple signals: the type of device you logged in with (phone, tablet, PC), the time of day you applied for credit, your email domain, and whether your name is part of your email address (names are a good sign). This might not be the best way to do it, but it is easier than retrieving your credit history. As machine learning and AI pick up more data about your online behavior, these signals will become an increasingly common basis for credit decisions. This raises questions like: is it legal, or ethical, to discriminate against someone based solely on the type of computer they use, or on other biased signals that an AI cannot tell apart from legitimate ones?
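To illustrate why this is worrying, here is a hypothetical sketch of how such alternative signals could end up inside a credit model. Everything here, the feature names, the applicants, and the resulting weights, is invented; the point is that an explainable model at least lets us see which signals drive the decision:

```python
# A hypothetical sketch of "alternative" credit signals feeding a model.
# All feature names, data, and weights are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

features = ["applied_at_night", "free_email_domain",
            "name_in_email", "uses_old_computer"]

# Toy applicant data: one row per applicant, one column per signal above.
X = np.array([
    [1, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
])
y = np.array([0, 1, 1, 0, 1, 0])  # 0 = denied, 1 = approved

model = LogisticRegression().fit(X, y)

# Because the model is linear, each coefficient shows how strongly a
# signal pushes the decision, exposing proxies that may simply encode bias:
for name, coef in zip(features, model.coef_[0]):
    print(f"{name}: {coef:+.2f}")
```

With a transparent model like this, a coefficient on something like "uses_old_computer" is visible and can be challenged; in a black-box model, the same bias would be just as present but invisible.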
It is more important than ever to understand how AI is reshaping the world and how it makes decisions for us. If we are allowing machines and artificial intelligence to find patterns, make decisions, and intervene directly in our lives, let's learn about it, get it right, and obtain the highest possible value from it.
If you liked this article, subscribe to my newsletter and follow me on LinkedIn, Twitter, or Instagram to stay up to date on technology, innovation, finance, fintech, entrepreneurship, and more.
Sources:
Forbes: Understanding Explainable AI
Brookings: Credit denial in the age of AI