Sharing the third installment of Auditoria’s latest eBook “Demystifying AI Governance and Control for Finance: A Guide to AI Governance.”
In Part 3 of 4 of our social and blog series, we’ll explore trustworthiness and accountability in AI and the importance of human-in-the-loop interfaces for AI systems.
Trustworthy AI depends upon accountability, and accountability presupposes transparency. Transparency reflects the extent to which information about an AI system and its outputs is available to individuals interacting with such a system.
Meaningful transparency provides access to appropriate levels of information based on the stage of the AI lifecycle. Transparency should be tailored to the role or knowledge of individuals interacting with or using the AI system.
An AI actor is an individual responsible for the contextual decisions that shape how an AI system is used and for ensuring the system’s deployment into production.
Examples include system integrators, software developers, end users, operators and practitioners, evaluators, and domain experts with expertise in human factors, socio-cultural analysis, and governance.
By promoting higher levels of understanding, transparency increases confidence in the AI system. Transparency is often necessary for actionable responses when AI system outputs are incorrect or otherwise lead to negative impacts. Transparency efforts should also account for how humans actually interact with the AI system.
Validation provides objective evidence that the requirements for a specific intended use or application have been fulfilled. Deployment of AI systems that are inaccurate, unreliable, or poorly generalized to data and settings beyond their training creates and increases negative AI risks and reduces trustworthiness.
Reliability is the ability of the system to perform as required, without failure, for a given time interval, under given conditions. Reliability is a goal for the overall correctness of AI system operation under the conditions of expected use and over a given period of time, including the entire lifetime of the system.
Validity and reliability for deployed AI systems are often assessed by ongoing testing or monitoring that confirms a system is performing as intended. Measurement of validity, accuracy, robustness, and reliability contributes to trustworthiness.
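As a concrete illustration of this kind of ongoing monitoring, here is a minimal Python sketch. The `ReliabilityMonitor` class, the window size, and the accuracy threshold are all hypothetical choices for illustration, not a prescribed design; real systems would choose metrics and thresholds suited to their domain.

```python
from collections import deque

class ReliabilityMonitor:
    """Tracks rolling accuracy of a deployed model over recent production
    traffic and flags when performance drifts below an agreed threshold.
    (Illustrative sketch; names and thresholds are assumptions.)"""

    def __init__(self, window_size=100, accuracy_threshold=0.95):
        self.window = deque(maxlen=window_size)  # most recent outcomes only
        self.accuracy_threshold = accuracy_threshold

    def record(self, prediction, actual):
        """Log one prediction/outcome pair observed in production."""
        self.window.append(prediction == actual)

    def accuracy(self):
        """Rolling accuracy over the window, or None with no evidence yet."""
        if not self.window:
            return None
        return sum(self.window) / len(self.window)

    def is_performing_as_intended(self):
        """True only when measured accuracy meets the chosen threshold."""
        acc = self.accuracy()
        return acc is not None and acc >= self.accuracy_threshold

# Example: 4 of the last 5 predictions were correct.
monitor = ReliabilityMonitor(window_size=5, accuracy_threshold=0.8)
for pred, actual in [(1, 1), (0, 0), (1, 0), (1, 1), (0, 0)]:
    monitor.record(pred, actual)
print(monitor.accuracy())                   # 0.8
print(monitor.is_performing_as_intended())  # True
```

The point is not the specific metric but the loop: measure continuously, compare against a human-chosen threshold, and treat a shortfall as a signal for review rather than silently continuing.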
To properly assess an AI system’s transparency, validation, and reliability, organizations should confirm that the software demonstrably meets each of these criteria, both before and after deployment.
While AI systems will automate tasks and provide valuable insights, software lacks the nuanced understanding, empathy, and adaptability of humans. Therefore, incorporating human oversight and judgment is vital to ensure that AI technologies are deployed safely, ethically, and effectively.
In a recent report on the place of advanced technology in the corporate enterprise, Gartner highlighted the importance of keeping humans in the loop (HITL). HITL helps safeguard against inaccurate data that may lead to poor decisions and other adverse outcomes. AI can only do so much independently without a set of guidelines or a comprehensive plan.
While AI is indeed capable of learning new things on its own, its learning process needs to be directed by a human. The responsibility of humans in governance is therefore to make sure AI models follow the law and apply it in a way consistent with their training.
But training and regulatory compliance are only the beginning of human oversight. The outcomes generated by AI must also be reviewed by people, with the main objective of making sure those outcomes are consistent with expectations. This inspection helps uncover data biases and training errors.
Human professionals should monitor and validate the outputs of generative AI models, ensuring that they align with ethical and regulatory standards. Clear guidelines and protocols should be established to determine when and how human intervention is necessary.
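One common way to encode such a protocol is a confidence gate: outputs the model is sufficiently sure of are released, and everything else is routed to a human reviewer. This is a minimal sketch under assumed names; the `0.9` floor and the `REVIEW_QUEUE` are illustrative, and a production system would define its own escalation criteria.

```python
# Illustrative human-in-the-loop gate; names and the 0.9 floor are assumptions.
REVIEW_QUEUE = []  # stand-in for a real human-review workflow

def handle_model_output(output, confidence, confidence_floor=0.9):
    """Auto-approve high-confidence outputs; escalate the rest to a human."""
    if confidence >= confidence_floor:
        return {"status": "auto_approved", "output": output}
    # Below the floor, a human professional must validate before release.
    REVIEW_QUEUE.append(output)
    return {"status": "pending_human_review", "output": output}

print(handle_model_output("Invoice matches PO", 0.97)["status"])
# auto_approved
print(handle_model_output("Possible duplicate payment", 0.42)["status"])
# pending_human_review
```

The design choice worth noting: the system never silently discards a low-confidence answer; it preserves it for human judgment, which is what makes the oversight auditable.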
Oversight involves continuous monitoring and evaluation of AI systems to guarantee they function as intended, comply with legal and ethical standards, and do not infringe on human rights or freedoms. This includes the assessment of data sources, algorithms, and outputs for biases or errors.
Accountability, on the other hand, ensures that individuals or organizations responsible for the development and deployment of AI systems are held answerable for their performance and impact. It requires clear protocols for identifying and addressing any misuse or harmful consequences of AI, and mandates transparency in AI operations.
Together, human oversight and accountability play a fundamental role in building trust in AI technologies, safeguarding against abuses, and ensuring AI contributes positively to society.
AI models are limited to the data they were trained on. In novel or unforeseen situations, they may produce unreliable or incorrect outputs. Humans should step in to provide a rational response in such cases.
Human judgment should be employed when deciding on the specific metrics related to AI trustworthiness characteristics and the precise threshold values for those metrics.
AI risk management efforts should prioritize the minimization of potential negative impacts, and may need to include human intervention in cases where the AI system cannot detect or correct errors.
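The two points above can be sketched together: humans choose the trustworthiness metrics and their threshold values, and any metric that falls short is surfaced for human intervention. The metric names and numbers below are purely illustrative assumptions, not recommended values.

```python
# Human-chosen thresholds for trustworthiness metrics (illustrative values).
THRESHOLDS = {"accuracy": 0.95, "robustness": 0.90, "reliability": 0.99}

def failing_metrics(measured):
    """Return the metrics falling below their human-chosen thresholds,
    i.e. the cases that should trigger human intervention."""
    return {name: value for name, value in measured.items()
            if value < THRESHOLDS.get(name, 0.0)}

measured = {"accuracy": 0.97, "robustness": 0.85, "reliability": 0.995}
needs_intervention = failing_metrics(measured)
print(needs_intervention)  # {'robustness': 0.85}
```

Here the AI system itself does not decide whether 0.90 robustness is acceptable; that judgment, and the response to a shortfall, stays with people.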
Ultimately, humans are accountable for the actions and decisions made by AI systems. If something goes wrong, it’s important to have a human in the loop who can take ownership, intervene, and set things right.
Fill out the form and download your complimentary copy of the Auditoria.AI eBook.
"Demystifying AI Governance and Control for Finance - A Guide to AI Governance"