A Fairer AI Future: Tackling Data Bias and Accountability

During this year’s Public Sector AI Week, Sara Boltman, co-founder of Butterfly Data, spoke about the growing role of artificial intelligence in government and public services. Drawing from her background in data science and machine learning, she highlighted AI's potential to improve decision-making while warning of the risks of bias without proper oversight.

The Misconception of Fairness in Data

Sara had once hoped that data-driven decisions would eliminate bias. But she soon realised that AI can unintentionally reinforce, or even worsen, inequalities.

She shared an early experience from her time in the car insurance industry. Insurers like Sheila’s Wheels once used gender to set premiums, offering women lower rates because the data showed they were statistically safer drivers, until pricing on gender was banned in 2012. When machine learning tools such as 'Automated Decision Support' later emerged, they combined historical data with demographics to calculate quotes. This created a feedback loop: customers who received uncompetitive quotes left for other insurers, leaving behind those who fit certain risk profiles. Over time, this reinforced hidden biases, unfairly penalising some demographics.
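A toy simulation, not taken from Sara's talk, helps make this loop concrete. It assumes two groups with identical true risk, where one group is simply more likely to shop around when over-quoted; re-estimating risk only from the customers who stay then makes that group look riskier than it really is, so the next round of quotes penalises it further.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

group = rng.integers(0, 2, n)        # two demographic groups, 0 and 1
risk = rng.uniform(0.05, 0.15, n)    # identical true claim risk in both groups

# Year 1: a flat premium based on the whole portfolio's average risk.
quote = np.full(n, 1000 * risk.mean())

# Assumed behavioural difference: group 1 shops around more, so its
# over-charged (lower-risk) members are much more likely to switch insurer.
overcharged = quote > 1000 * risk
leave_prob = np.where(group == 1, 0.9, 0.3) * overcharged
stayed = rng.random(n) >= leave_prob

# Year 2: the insurer re-estimates each group's risk from the customers it kept.
claims = rng.random(n) < risk
for g in (0, 1):
    kept = stayed & (group == g)
    print(f"group {g}: true average risk {risk[group == g].mean():.3f}, "
          f"re-estimated from retained customers only {claims[kept].mean():.3f}")
```

Even though both groups start with the same true risk, the group that shops around keeps only its higher-risk members on the books, so the retrained estimate drifts upwards for that group alone.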

AI bias can also persist when sensitive attributes are removed from data sets. Take, for instance, the gender pay gap. Even when data sets exclude gender, AI models may still infer it through proxies such as job role or part-time work patterns, which can in turn reinforce pay inequalities. Sara explained that this bias can become entrenched over time if not carefully addressed.
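A similarly hedged sketch of proxy bias: the synthetic data below encodes a historical pay penalty on gender and a correlation between gender and part-time work. The model is then trained with the gender column excluded, yet its predictions still reproduce much of the gap through the part-time proxy. All salary figures and coefficients are invented purely for illustration.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 20_000

gender = rng.integers(0, 2, n)      # 1 = women (synthetic label)
# In this invented data, women are far more likely to work part-time ...
part_time = (rng.random(n) < np.where(gender == 1, 0.40, 0.08)).astype(float)
experience = rng.normal(10, 4, n).clip(min=0)
# ... and historical pay carries a direct penalty on gender (the existing gap).
pay = (30_000 + 1_500 * experience - 6_000 * part_time
       - 3_000 * gender + rng.normal(0, 2_000, n))

# Train the pay model WITHOUT the gender column.
X = np.column_stack([experience, part_time])
model = LinearRegression().fit(X, pay)

predicted = model.predict(X)
gap = predicted[gender == 0].mean() - predicted[gender == 1].mean()
print(f"predicted pay gap with gender excluded from training: £{gap:,.0f}")
```

Dropping the sensitive column does not drop the signal: the part-time flag absorbs part of the historical penalty, so the model keeps predicting lower pay for the same group.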

These kinds of data challenges aren’t limited to insurance, or even gender. Algorithms that assess benefit eligibility, identify high-risk healthcare patients, or determine school funding, for instance, can also deepen inequalities if not carefully monitored.

The Need for AI Transparency and Accountability

AI adoption brings challenges. Automated tools that categorise names, addresses, or incomes often rely on proprietary data, making it difficult to spot and fix biases. Without transparency, these systems risk perpetuating discrimination.
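As a thought experiment (not a method Sara described), even a closed, proprietary tool can be checked from the outside by comparing its decisions across groups. The sketch below computes group-level approval rates and a disparate-impact ratio from nothing more than the tool's inputs and outputs; the sample data and thresholds are hypothetical.

```python
import numpy as np

def selection_rates(decisions, groups):
    """Approval rate per group and the disparate-impact ratio (lowest / highest)."""
    rates = {g: decisions[groups == g].mean() for g in np.unique(groups)}
    return rates, min(rates.values()) / max(rates.values())

# Hypothetical audit sample: a group label and the black-box tool's yes/no output.
rng = np.random.default_rng(1)
groups = rng.choice(["A", "B"], size=5_000)
decisions = rng.random(5_000) < np.where(groups == "A", 0.62, 0.48)

rates, ratio = selection_rates(decisions, groups)
print(rates)                       # per-group approval rates
print(f"disparate-impact ratio: {ratio:.2f} (the common 'four-fifths' rule flags < 0.80)")
```

Checks like this do not explain why a system discriminates, but they give regulators and auditors a way to flag it without access to the vendor's code or training data.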

Sara stressed the importance of strong AI governance, citing Mustafa Suleyman’s book ‘The Coming Wave’, where he warns that AI is advancing faster than we can control it. She outlined what she believes are the main priorities for safer AI:

  • Using diverse and representative data to reduce bias

  • Creating clear regulations to define accountability when AI causes harm

  • Establishing independent oversight to audit AI systems and ensure ethical practices

  • Investing in safe AI development that prioritises fairness over profit

AI Governance in the Public Sector

The recently published AI Playbook from the UK Government offers guidelines for ethical AI, focusing on human oversight, security, and fairness. Sara emphasised the need to upskill public sector workers and build diverse teams to improve AI literacy and accountability.

She also stressed that international collaboration is vital. AI is a global technology, and coordinated regulation is key to ensuring its responsible development.

Shaping AI for a Fairer Future

Sara’s message was clear: AI will shape the future, but we must shape AI responsibly. With strong governance, a focus on fairness, and clear accountability, we can ensure AI benefits everyone.

Watch Sara's full talk from Public Sector AI Week for more insights:

Get in touch with us to discuss any data governance or AI challenges you are facing.



