Artificial Intelligence (AI) is expected to bring significant changes to the global economy, with vast potential value waiting to be unlocked as macroeconomic conditions tighten across the world. To meet this disruption head-on, business leaders are digging deeper, searching for innovations that allow them to deliver more with less and arming employees with the ability to act, react, and respond to disruption with the dwindling resources at their disposal.
According to the most recent PwC CEO survey, two-thirds of regional CEOs view technological disruption as a primary concern. Furthermore, 84% of the region's CEOs plan to invest in automation, artificial intelligence, and cloud technology in 2023, while 74% intend to invest in upskilling their workforce.
So, with the global business narrative centred on automating manual, time-heavy tasks, what is holding back effective AI-driven decision-making? And once those tasks are automated, how can businesses ensure the AI-generated insights are even accurate?
Low-quality data and AI bias: what's the solution?
While AI comes in many shapes and sizes, it is, in short, a pattern-recognition machine. It responds the way it has been programmed to respond and relies on historical data inputs to build and train the model, mimicking the human decision-making process to deliver insights. Once the exclusive domain of data scientists and programmers, AI has now moved into the mainstream. A PwC report on the potential impact of AI in the Middle East estimates that the region will capture 2% of the global benefits of AI, equivalent to US$320 billion, by 2030. Yet despite the clear business benefits, transparency and ethics remain significant concerns worldwide, and many businesses do not trust the analytic insights generated from their data enough to use them for decision-making.
In many cases, this form of distrust is a hallmark of low analytic maturity levels. According to the International Institute for Analytics (IIA), the majority of organizations still sit at the ‘spreadsheet stage’ of analytics – ranking at just 2.2 out of 5 on the Analytic Maturity Scale.
This spreadsheet stage is symptomatic of a wider issue – one defined by low accessibility, low data quality, and little in the way of governance, sovereignty, or quality control. This operating environment is the perfect breeding ground for biased AI insights.
Building and utilizing ethical AI
In operational terms, a development team will usually receive a request for an analytic model that performs a specific task. In building that AI model, the team will invariably request datasets from the relevant departments, such as HR. If those departments provide live production data, a list of CVs from the last 10 years, for instance, instead of a carefully selected and cleaned dataset, that data will likely contain embedded biases. If we are in high tech and searching for developers and engineers, for example, the data may be biased towards men and against women: because of historical hiring patterns within companies, the algorithm learns to correlate specific, widely held qualities with success in the role.
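To make this concrete, here is a minimal sketch of the kind of check that could expose such skew before a screening model is ever trained. The data, column names, and the four-fifths threshold are all illustrative assumptions, not a prescribed method:

```python
# Illustrative sketch: surfacing historical skew in a hiring extract
# before it is used to train a screening model. All data is hypothetical.
import pandas as pd

# Hypothetical extract of past screening outcomes
cvs = pd.DataFrame({
    "gender": ["M", "M", "M", "F", "M", "F", "M", "M", "F", "M"],
    "hired":  [1,   1,   0,   0,   1,   0,   1,   0,   1,   1],
})

# Selection rate per group: a large gap here means a model trained on
# this data will likely learn and reproduce the imbalance.
rates = cvs.groupby("gender")["hired"].mean()
print(rates)

# Disparate-impact ratio (the "four-fifths" rule of thumb): values well
# below 0.8 are a common red flag for biased training data.
print("ratio:", round(rates.min() / rates.max(), 2))
```

A gate this simple will not catch every form of bias, but it turns a vague worry about "historical data" into a number a domain expert can challenge before handing the dataset over.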
When delivering ethical AI-driven insights, there are three core requirements. The first is the data itself. The second is for that data to be of high enough representative quality to deliver valid insights when fed into AI models. The third is for data workers to package and deliver this data, passing it back up the analytics chain.
How AI is built is the responsibility of both developers and business leaders, but because the data pipeline that feeds the model is often collected, contextualized, and delivered by individual departments, these domain experts need to understand the ethical and pragmatic use of data to avoid any opportunity for bias to creep in. From a business perspective, there are over 20 mathematical definitions of fairness; which one the company should use is a business decision that should not be left solely to the developer.
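Why this is a business decision rather than a technical one becomes clear when two common definitions are computed on the same model output. The sketch below, using entirely hypothetical numbers, shows a shortlist that satisfies demographic parity (equal selection rates) while violating equal opportunity (equal rates among genuinely qualified candidates):

```python
# Sketch: two common fairness definitions can disagree on the same
# predictions, so choosing between them is a policy call, not a code fix.
# All numbers are hypothetical.
import numpy as np

group  = np.array(["M"] * 6 + ["F"] * 6)
y_true = np.array([1, 1, 1, 1, 0, 0,  1, 1, 0, 0, 0, 0])  # actual success in role
y_pred = np.array([1, 1, 0, 0, 0, 0,  1, 1, 0, 0, 0, 0])  # model's shortlist

for g in ("M", "F"):
    m = group == g
    # Demographic parity: share of each group the model shortlists
    parity = y_pred[m].mean()
    # Equal opportunity: shortlist rate among qualified candidates only
    tpr = y_pred[m & (y_true == 1)].mean()
    print(g, "selection rate:", round(parity, 2), "TPR:", round(tpr, 2))

# Output: both groups are shortlisted at the same rate (0.33), yet
# qualified men are picked at half the rate of qualified women (0.5 vs 1.0).
```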
Karl Crowther, VP, MEA, at Alteryx
Mitigating AI bias to deliver the business strategy
Without the right cross-departmental skill sets, data knowledge, and governance, the data selected to feed AI models can be not only flawed but also incomplete or non-compliant, and it may carry unmitigated historical systemic bias. Rather than relying on a small team of data experts, businesses must instead ensure that people with a diverse range of perspectives and lived experiences are included in any AI project, delivering quality assurance at the source of the data.
By bringing diverse and upskilled knowledge workers on board at that foundational data level, businesses ensure that the people closest to the problem, and closest to the datasets, are best positioned to highlight any errors, anomalies, or misunderstandings within that data. The result is a more robust strategy for flagging potentially biased datasets before the data is fed into an AI model, where any bias would only be amplified. A lightweight quality gate of the kind sketched below gives those domain experts a repeatable way to do exactly that.
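The following is a minimal sketch of such a pre-model quality gate. The function name, column names, thresholds, and sample data are all hypothetical; the point is that checks like these can be run by the department that owns the data, not only by data scientists:

```python
# Minimal sketch of a pre-model "quality gate" run by domain experts
# before a dataset is handed to the modelling team. Thresholds and
# column names are hypothetical.
import pandas as pd

def quality_gate(df: pd.DataFrame, group_col: str, reference_shares: dict,
                 max_missing: float = 0.05, max_drift: float = 0.10) -> list:
    """Return a list of human-readable warnings; an empty list means pass."""
    warnings = []
    # 1. Missing data: silent gaps often hide systemic collection bias.
    missing = df.isna().mean()
    for col, share in missing[missing > max_missing].items():
        warnings.append(f"{col}: {share:.0%} missing (limit {max_missing:.0%})")
    # 2. Representation drift: compare group shares against a known
    #    reference population (e.g. the applicant pool, not past hires).
    shares = df[group_col].value_counts(normalize=True)
    for grp, expected in reference_shares.items():
        gap = abs(shares.get(grp, 0.0) - expected)
        if gap > max_drift:
            warnings.append(f"{group_col}={grp}: share off by {gap:.0%}")
    return warnings

# Hypothetical usage on a small, deliberately skewed extract
df = pd.DataFrame({"gender": ["M"] * 8 + ["F"] * 2,
                   "years_exp": [5] * 9 + [None]})
print(quality_gate(df, "gender", {"M": 0.5, "F": 0.5}))
```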
In delivering this AI future and ensuring businesses derive the greatest benefit from AI systems, one thing is clear: layering technology upon technology will not, by itself, deliver trusted or ethical insights at scale. Only through the combination of quality data, diverse human intelligence, and robust governance processes will AI become the force needed to deliver bias-free automated decision intelligence.