As VP of Operations at BairesDev, Damian oversees the entire customer relations life-cycle, safeguarding the company’s operations.
Every day it seems like artificial intelligence (AI) plays a more important role in our lives. From the work we do to the content we consume, we constantly run into algorithms designed to make our lives easier in some way. However, it isn’t difficult for a well-intentioned AI tool to inadvertently do more harm than good. How? By relying on and amplifying unfair biases.
Yes, computer algorithms can be (and often are) biased, and these biases can have a significant impact on people. Case in point: In 2019, researchers discovered that a medical algorithm widely used by U.S. healthcare institutions unintentionally discriminated against Black patients. Long story short, the bias originated in an assumption of causality: the algorithm used patients' past healthcare costs as a proxy for their medical needs, and because Black patients had historically incurred lower costs at the same level of need, it systematically underestimated how sick they were.
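To see how a proxy label can smuggle bias into an otherwise neutral model, consider a minimal sketch. This is a hypothetical simulation, not the actual 2019 algorithm: it assumes two groups with identical distributions of medical need, but one group's recorded costs understate that need. Ranking patients by cost then under-flags that group.

```python
# Hypothetical illustration of proxy-label bias: using past healthcare cost
# as a stand-in for medical need encodes bias when one group has historically
# received less care (lower cost) for the same level of need.
import random

random.seed(42)

def simulate_patient(group):
    need = random.uniform(0, 10)  # true medical need (unobserved by the model)
    # Assumed disparity: group B incurs lower recorded costs for equal need.
    access = 1.0 if group == "A" else 0.6
    cost = need * access
    return need, cost

patients = [("A", *simulate_patient("A")) for _ in range(1000)] + \
           [("B", *simulate_patient("B")) for _ in range(1000)]

# A "risk score" trained on cost effectively ranks patients by cost,
# so flag the top half of patients by recorded cost.
threshold = sorted(p[2] for p in patients)[len(patients) // 2]
flagged = [p for p in patients if p[2] >= threshold]

share_b = sum(1 for p in flagged if p[0] == "B") / len(flagged)
print(f"Share of group B among flagged patients: {share_b:.2f}")
# Both groups have identical need distributions, yet group B ends up
# under-represented among flagged patients because cost understates its need.
```

The fix in cases like this is not better math on the proxy but a better label: scoring on a direct measure of health rather than on spending.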
Assuming Causality Is A Dangerous Game
Keeping unfair bias out of AI technology isn't easy. Engineers know that there's always a chance their algorithms will work in unexpected ways. Beyond that, assuming causality is simply part of human nature. After all, that's how we learned to run from predators, prevent illnesses and build shelters. But today, mistaking correlation for causation can be extremely harmful in high-stakes domains like healthcare and justice.
A lack of understanding of underlying social systems can lead to algorithmic designs that oversimplify our imperfect reality and entrench the very problems we're trying to solve. Yet causation-correlation mistakes are made all the time, and deep learning systems encounter billions of opportunities to make them as they analyze massive datasets.
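The correlation-causation trap is easy to demonstrate. In this small sketch (with hypothetical variable names), two series that never influence each other appear strongly correlated simply because both are driven by a shared confounder, a common upward trend; removing the confounder makes the apparent relationship vanish.

```python
# A minimal sketch of a correlation-causation trap: two causally unrelated
# variables correlate strongly because both follow a shared confounder.
import random

random.seed(0)

n = 200
trend = [t / 10 for t in range(n)]                    # shared confounder
ice_cream = [t + random.gauss(0, 1) for t in trend]   # hypothetical series 1
drownings = [t + random.gauss(0, 1) for t in trend]   # hypothetical series 2

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

r_raw = pearson(ice_cream, drownings)
# "Control" for the confounder by correlating residuals after removing it.
res_a = [x - t for x, t in zip(ice_cream, trend)]
res_b = [y - t for y, t in zip(drownings, trend)]
r_controlled = pearson(res_a, res_b)

print(f"raw correlation: {r_raw:.2f}, after removing trend: {r_controlled:.2f}")
```

A model trained naively on this data would happily "learn" that one variable predicts the other; only knowledge of the underlying system reveals that the relationship is spurious.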
What We Can And Must Do
Overcoming unfair biases in AI starts by embracing the fact that they exist and doing something about it. And, for that, we must rethink the questions we ask ourselves. While a question like “Will automating this process help us achieve this goal in the next quarter?” is perfectly good for determining if creating an algorithm will be worth it, it isn’t enough to actually drive the technology’s development goals.
Once development starts, we need to bring the human factor into play. Data scientists must understand the underlying variables the algorithm will analyze, how it will process them and how the resulting datasets and models will be constructed. Thinking beyond engineering means getting as close as possible to the stakeholders impacted by the algorithm and taking into account the social implications of its implementation. This is a responsibility that everyone developing AI solutions needs to keep in mind (including businesses that outsource Python development to companies like ours to implement their own AI solutions).
In a recent paper, researchers argued that considering these social implications will require AI engineers to embrace societal contexts as they are: complex, nonlinear, dynamic and adaptive systems with hard-to-see feedback mechanisms. While we are all part of these systems, no single person or machine can fully understand them.
Does that mean that there will always be blind spots in AI development? Most likely, yes. And our best chance to find them and patch them is to collaborate with the people closest to the societal context itself — sociologists, behavioral scientists and humanities specialists. Together, we can form a shared hypothesis of how a certain algorithm could work and how we can best guarantee win-win scenarios.
All of that must happen in the earliest stages of development. There is no point in starting product design if our approach doesn't fully account for the implications of the algorithm itself. Today, AI engineering experts refer to this as community-based system dynamics (CBSD).
The CBSD approach applies systems thinking to incorporate both qualitative and quantitative methods that describe complex problems. In the end, this is all about supporting technological innovations that are fair, safe and beneficial to everyone.
A Shift In Mindset
The future of AI is being designed right now. Inclusivity and equity are two areas in which humans don't have a great record, and we can help change that by getting AI right. By taking into account the social implications of algorithmic technology, we can stop it from reflecting the unfair and unjust elements of our society and eventually eliminate those effects from our technology.
And, as I said before, it’s in everyone’s hands. Every company is a tech company, and the majority of businesses will be using AI in the short term (if that hasn’t already happened by the time you’re reading this). In any case, taking this approach to AI engineering will make our systems more reliable, fueling our algorithms with better data and helping us solve problems more efficiently. This is the shift in mindset that will unlock our capability to build an equitable and fair future.