Original blog written by Nicholas Sewe, World Benchmarking Alliance
The key issue… is that companies, in their race to develop and deploy these technologies, often fail to ensure that algorithms are thoroughly tested and regularly audited for fairness, transparency and accountability. When things go wrong, the consequences can be far-reaching, affecting not just individuals but entire communities.
World Benchmarking Alliance, June 2025
This blog explores the urgent need for stronger corporate accountability in the governance of artificial intelligence. As AI becomes more embedded in sectors ranging from healthcare and finance to recruitment and cyber security, the risks of bias, discrimination, and human rights violations are growing, particularly in regions lacking robust regulation. While initiatives like the UN Global Digital Compact are a positive step, the blog stresses the importance of enforceable mechanisms to ensure AI technologies benefit society as a whole.
Through real-world examples, the piece highlights how AI systems, if left unchecked, can cause significant harm, such as biased loan or hiring decisions that disproportionately affect marginalised groups. The blog argues that ethical AI governance should be a continuous, proactive process involving independent audits, transparency, and clear avenues for redress. It calls not only on companies but also on governments and consumers to take shared responsibility for ensuring AI is developed and used in ways that are fair, accountable, and socially responsible.
Find out more below: