Updated: Wednesday, May 3rd 2023, 10:56:51 am
In the wake of the COVID-19 pandemic, adoption of AI has become critical for business survival. AI systems are being deployed to enhance and scale up customer service and sales operations while cutting labour costs and improving productivity. As AI leads digital transformation across the business landscape, an important conversation needs to happen around AI ethics. The ethical challenges in deploying AI arise from the large-scale collection, analysis, and use of the data that feeds algorithms and machine learning models. A major risk is that biased or poor-quality data and algorithms can produce bad outcomes, harming businesses internally by failing to provide the insights they need and externally by giving customers the perception that they are being marginalised or excluded.
Companies like IBM, Facebook, and Goldman Sachs have come under fire for ethical violations and instances of bias in their AI frameworks. The city of Los Angeles sued IBM for unethically collecting consumer data through its weather app. Healthcare company Optum was investigated by regulators who found that its algorithm was racially biased: it recommended that doctors and nurses pay more attention to white patients than to black patients. Similarly, Goldman Sachs was investigated over an AI algorithm that discriminated against women by granting men larger credit limits than their female counterparts on the Apple Card. Amazon, one of the biggest tech companies in the world, realised in 2015 that its hiring algorithms were biased against women. Upon further probing, Amazon found that because the algorithm was trained on applications submitted over the previous ten years, during which there were far more male applicants than female ones, the AI had taught itself to favour men over women. This revelation ultimately led Amazon to shut down the program, as it couldn't find a way to eliminate the bias. These are just a few real-life examples of discriminatory artificial intelligence.
Even though corporations have a tough time keeping up with the pace of AI innovation, it is their moral obligation to regularly check their AI frameworks for potential risks and biases. Testing is a critical step for businesses to ensure that the inputs to their machine learning programs produce outcomes that do not endanger customer trust. Ethical AI is a necessity today for maintaining customer loyalty and trust, as well as the company's reputation in the marketplace. Some studies have predicted that 80% of companies that fail to deploy AI will go out of business by 2025, making it all the more important for business leaders and entrepreneurs to understand AI's ethical responsibility towards society and the implications of its absence.
AI ethics refers to the set of guiding principles an organization establishes to distinguish right from wrong in the design and use of AI technologies. AI solutions must be free from bias to produce ethical outcomes that are fair to employees, customers, and society in general. To build more ethical AI, businesses need to internally evaluate and test their data to ensure it is free of social and cultural biases. Fundamentally, what business leaders need to understand is that AI applications must deliver their intended results or functions ethically.
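As a minimal sketch of what such an internal evaluation might look like, the hypothetical check below computes a "disparate impact" ratio: the rate of favourable model outcomes for the least-favoured group divided by the rate for the most-favoured group, compared against the common four-fifths rule of thumb. The function name, the 0.8 threshold as a default, and the sample data are all illustrative assumptions, not any specific company's audit method.

```python
from collections import defaultdict

def disparate_impact(outcomes, threshold=0.8):
    """Compare favourable-outcome rates across demographic groups.

    outcomes: list of (group, approved) pairs, approved being True/False.
    Returns per-group approval rates, the ratio of the lowest rate to the
    highest, and whether that ratio meets the four-fifths rule of thumb.
    """
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in outcomes:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    rates = {g: approved / total for g, (approved, total) in counts.items()}
    ratio = min(rates.values()) / max(rates.values())
    return rates, ratio, ratio >= threshold

# Hypothetical credit-limit decisions, skewed against one group:
decisions = ([("men", True)] * 8 + [("men", False)] * 2
             + [("women", True)] * 4 + [("women", False)] * 6)
rates, ratio, passes = disparate_impact(decisions)
# rates -> {'men': 0.8, 'women': 0.4}; ratio -> 0.5, failing the 0.8 test
```

A check like this only flags unequal outcome rates; it does not explain why they occur, which is why the data itself must also be examined.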
AI bias occurs when a machine learning algorithm produces systematically discriminatory results because of flawed data fed into it. An AI algorithm is only as ethical as the data used to train it. When programmers build algorithms on faulty or incomplete data, the result can be unintended real-life prejudice, as in Amazon's case. Although in most scenarios the bias is unintentional, its real-world implications are significant: AI bias can lead to a poor customer experience, lower sales revenue, and possibly even illegal outcomes.
AI bias is a result of prejudice in the data that companies use to create AI algorithms. To put it simply, if I were to create an algorithm for a healthcare service using data about medical professionals that included only female nurses and male doctors, the AI would automatically reproduce real-life gender prejudice in the computer system. The framework would leave male nurses and female doctors out of its insights and analysis because the training data behind it was incomplete and faulty. This is just one example of how erroneous data can lead to AI bias with real-life implications. Unethical AI can lead to discrimination, reinforce stereotypes, and enable oppression.
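The nurse/doctor example above can be sketched in a few lines: a naive frequency-based "model" trained only on female nurses and male doctors will always infer gender from job title, so male nurses and female doctors simply cannot appear in its predictions. The data and the majority-vote model here are deliberately toy assumptions, chosen to make the failure mode visible, not a real healthcare system.

```python
from collections import Counter

# Toy training data reflecting the skew described above:
# every nurse is recorded as female, every doctor as male.
training = [("nurse", "female")] * 50 + [("doctor", "male")] * 50

def train_majority_model(rows):
    """Learn the most common gender observed for each role."""
    by_role = {}
    for role, gender in rows:
        by_role.setdefault(role, Counter())[gender] += 1
    return {role: c.most_common(1)[0][0] for role, c in by_role.items()}

model = train_majority_model(training)
# The model now hard-codes the prejudice baked into its data:
# it predicts "female" for every nurse and "male" for every doctor,
# leaving male nurses and female doctors invisible to it.
```

However sophisticated the real model, the principle is the same: if a combination never appears in the training data, the system cannot learn to account for it.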
To truly create a workplace that is diverse and equal, we need to invest in ethical AI that, unlike humans, is free of cognitive bias.
Companies adopting socially responsible AI are helping to build a more ethical economy. They can identify and implement human-complementing AI functionalities that deliver ethical and conscionable results to industries, employees, customers, and communities. AI technologies will never be ethical unless we proactively work to make them so.