Explaining AI: The importance of transparency and explainability

As AI solutions become more prevalent, customers and regulators are demanding more information about what this new technology does and how it is being used.

In Europe there is also a strong focus at governmental level on the ethical deployment of AI, and transparency forms an important part of this. The ability to explain AI, particularly where it is used to make decisions about people, is often seen by regulators as essential for organisations wishing to bring their customers, regulators and supply chain with them on their AI journey. But is it always possible (or sensible) to explain AI? In this briefing we look at why explaining AI is important and how, according to the UK's data regulator, organisations should go about explaining their AI use.


This material is provided for general information only. It does not constitute legal or other professional advice.

Contact Information
Rob Sumroy
Partner at Slaughter and May
Natalie Donovan
PSL Counsel and Head of Knowledge Tech and Digital at Slaughter and May