The arrival of ChatGPT and other large language models (LLMs) brought the concept of AI ethics into the mainstream. This is good because it shines a light on an area of work that has been tackling these issues for some time: Responsible AI. And it’s not just about ChatGPT and LLMs; it’s about any application of AI or machine learning that can have an impact on people in the real world. For example, AI models can decide whether to approve your loan application, advance you to the next round of job interviews, nominate you as a candidate for preventive health care, or determine whether you’re likely to reoffend on parole.
While the field of Responsible AI is gaining traction in the enterprise (in part due to impending regulation such as the EU’s AI Act), there are issues with current approaches to implementing Responsible AI. Perhaps due to AI and data illiteracy in large organizations, the task of Responsible AI is often left to data science teams. These teams are typically made up of scientists tasked with designing and building efficient and accurate AI models (most often using machine learning techniques).
The key point here is that the teams who build the models (and the technologies they use to build them) should not also be the ones tasked with objectively evaluating those models.
Industries outside of AI have a long and effective history of demanding independence in audits. As required by the Securities and Exchange Commission (SEC) in the United States, a company’s financial auditor must be completely independent of the company in question. From the SEC: “Ensuring the independence of the auditor is as important as the proper reporting and classification of revenues and expenses.”
Independence is also a key requirement in the Model Risk Management (MRM) process, by which statistical models developed in financial institutions are independently tested and verified. The three levels of MRM (Model Development, Model Validation, and Internal Audit) must each maintain strict independence from one another.
Therefore, we should not ignore this valuable history of audit independence when implementing Responsible AI. In this field, AI models and data must be measured so that aspects such as fairness, equity, privacy, and sustainability can be quantified and evaluated against the organization’s processes, principles, and frameworks.
Independence in Responsible AI must apply both to the people conducting the assessments and to the technologies they use to do so. This is important because:
- People can be protective of the models they build. This is quite understandable, as they have likely invested a lot of time and effort in creating the model; but for the same reason, they may fail to evaluate their own work objectively.
- AI models are often built and trained with custom code written by data scientists. Humans make mistakes in every line of work; in this context, that means errors or bugs in the code. Good software practice promotes code reuse, so the same (potentially flawed) code is likely to be used to evaluate the models as well.
- When designing an AI model and processing data, humans make assumptions and judgments along the way (and these are often encoded into the software). A truly independent process should not rest on those same assumptions.
- Automated software tools can generate models for the data scientist (these technologies are often called AutoML tools). They are marketed as faster, easier, and cheaper ways to create a model than building one by hand. However, if they also provide the technical measurements of the models they have just built, they are simply grading their own homework.
- An enterprise (or government) organization will likely have many models, not just one. To manage these models effectively at scale, quantitative indicators must be comparable between models. If modeling teams invent new metrics that they deem appropriate for each of their models, comparing them at scale against corporate standards becomes nearly impossible.
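To illustrate the last point, here is a minimal sketch (using hypothetical data and model outputs) of what a standardized, comparable indicator might look like: demographic parity difference, one common fairness metric, computed by the same function for every model so the results can be compared across a portfolio.

```python
def demographic_parity_difference(predictions, groups, positive=1):
    """Absolute gap in positive-prediction rates across groups.

    A single shared metric function like this, applied identically to
    every model's predictions, yields numbers that are comparable
    across an organization's model portfolio.
    """
    rates = {}
    for g in set(groups):
        # Predictions for members of this group
        member_preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(1 for p in member_preds if p == positive) / len(member_preds)
    values = sorted(rates.values())
    return values[-1] - values[0]  # 0.0 means equal rates across groups

# Hypothetical: two models scored on the same held-out individuals
groups = ["a", "a", "b", "b", "a", "b"]
model_1_preds = [1, 1, 1, 0, 1, 0]
model_2_preds = [1, 1, 1, 1, 1, 1]

print(demographic_parity_difference(model_1_preds, groups))
print(demographic_parity_difference(model_2_preds, groups))
```

The specific metric and data here are illustrative assumptions, not a prescription; the point is that the measuring code is shared and independent of any one modeling team, rather than reinvented per model.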
By bringing broader teams and technologies into the Responsible AI process, you also benefit from a diverse set of skills and perspectives. The task of Responsible AI requires skills in ethics, law, governance, and compliance (to name just a few), and those applying these skills must be armed with independent quantitative standards on which they can rely.
As technologies like ChatGPT raise awareness of the ethical issues surrounding AI, more and more CEOs are waking up to the potential unintended consequences of their own AI. While they will not understand every technical detail of their AI, an effective Responsible AI process gives them confidence that the appropriate guardrails are in place.
While the fields of artificial intelligence and machine learning are advancing rapidly, and teams are just beginning to address the ethical and regulatory issues associated with them, the principles of effective auditing are not new. As teams design their Responsible AI processes, it’s worth taking a moment to look at what’s already known.
About the Author
Dr. Stuart Battersby is Chief Technology Officer of Chatterbox Labs and holds a Ph.D. Chatterbox Labs is a leading AI software company whose AI Model Insights platform independently validates enterprise AI models and data.