Cornerstones of assurance

In October 2023, the Department of the Prime Minister and Cabinet (PM&C) published How might artificial intelligence affect the trustworthiness of public service delivery? (PM&C 2023).

The report identified that current trust in AI is low and that developing community trust would be a key enabler of government adoption of AI technology.

Alignment with Australia’s AI Ethics Principles, developed by CSIRO’s Data61 and the Department of Industry, Science and Resources (DISR), supports the trustworthy use of AI by governments in Australia. Each of the 8 ethics principles informs the assurance practices found in this framework, and the principles are consistent with the Australian Government’s broader work on safe and responsible AI.

The principles will help governments demonstrate and achieve:

  • safer, more reliable and fairer outcomes for all
  • reduced risk of negative impact on those affected by AI
  • the highest ethical standards when designing, developing and implementing AI.

To effectively apply the AI ethics principles, governments should also consider the following cornerstones for their assurance practices.

Governance

AI governance comprises the organisational structures, policies, processes, regulation, roles, responsibilities and risk management frameworks that ensure the safe and responsible use of AI in a way that is fit for the future.

The use of AI presents challenges that require a combination of technical, social and legal capabilities and expertise. These cut across core government functions such as data and technology governance, privacy, human rights, diversity and inclusion, ethics, cyber security, audit, intellectual property, risk management, digital investment and procurement.

Implementation of AI should therefore be driven by business or policy areas and be supported by technologists.

Existing decision-making and accountability structures should be adapted and updated to govern the use of AI. This reflects the likely impacts upon a range of government functions, allows for diverse perspectives, designates lines of responsibility and provides clear sight to agency leaders of the AI uses they are accountable for.

Governance structures should be proportionate and adaptable to encourage innovation while maintaining ethical standards and protecting public interests.

At the agency level, leaders should commit to the safe and responsible use of AI and develop a positive AI risk culture to make open, proactive AI risk management an intrinsic part of everyday work.

They should provide the necessary information, training and resources for staff to have the knowledge and means to:

  • align with the government’s objectives
  • use AI ethically and lawfully
  • exercise discretion and judgement in using AI outputs
  • identify, report and mitigate risks
  • consider testing, transparency and accountability requirements
  • support the community through changes to public service delivery
  • clearly explain AI-influenced outcomes.

Data governance

The quality of an AI model’s output is driven by the quality of its data.

It’s therefore important to create, collect, manage, use and maintain datasets that are authenticated, reliable, accurate and representative, and to maintain robust data governance practices that comply with relevant legislation.
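As an illustration of what such practices might look like in operation, the minimal Python sketch below runs basic completeness, uniqueness and representativeness checks over a tabular dataset. The file name, column name and thresholds are hypothetical assumptions for the example, not requirements of this framework.

```python
# Minimal sketch of automated dataset quality checks (illustrative only).
# Assumes a tabular dataset handled with pandas; thresholds are examples.
import pandas as pd

def check_dataset_quality(df: pd.DataFrame, protected_attribute: str) -> dict:
    """Run basic completeness, uniqueness and representativeness checks."""
    report = {}

    # Completeness: flag columns with a high share of missing values.
    missing_share = df.isna().mean()
    report["columns_over_5pct_missing"] = missing_share[missing_share > 0.05].to_dict()

    # Reliability: duplicated records can silently bias model training.
    report["duplicate_rows"] = int(df.duplicated().sum())

    # Representativeness: inspect the distribution of a demographic or
    # protected attribute so under-represented groups are visible.
    report["group_shares"] = df[protected_attribute].value_counts(normalize=True).to_dict()

    return report

# Example usage with a hypothetical service-delivery dataset.
df = pd.read_csv("service_records.csv")  # hypothetical file
print(check_dataset_quality(df, protected_attribute="age_band"))
```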

Data governance comprises the policies, processes, structures, roles and responsibilities needed to achieve this, and is as important as any other governance process. It ensures responsible parties understand their legislative and administrative obligations and see the value data governance adds to their work and to their government’s objectives.

Data governance is also an exercise in risk management: it allows governments to minimise the risks around the data they hold while gaining maximum value from it.

A risk-based approach

The use of AI should be assessed and managed on a case-by-case basis. This ensures safe and responsible development, procurement and deployment in high-risk settings, with minimal administrative burden in lower-risk settings.

The level of risk depends on the specifics of each case, including factors such as the business domain context and data characteristics. Self-assessment models, such as the NSW Artificial Intelligence Assurance Framework, help to identify, assess, document and manage these risks.

Risks should be managed throughout the AI system lifecycle, including reviews at transitions between lifecycle phases. The OECD defines the phases of an AI system as:

  1. design, data and models - a context-dependent sequence encompassing planning and design, data collection, processing and model building
  2. verification and validation
  3. deployment
  4. operation and monitoring.

This AI system lifecycle may be embedded within the broader project management and procurement lifecycles, and risks may need re-evaluation where a significant change occurs at any phase.
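As a simple illustration of risk reviews at phase transitions, the sketch below models the OECD lifecycle phases as explicit gates that cannot be passed without a documented review. It is a minimal Python sketch under stated assumptions: the phase names follow the OECD list above, and the pass/fail flag stands in for an agency’s actual assessment and sign-off process.

```python
# Illustrative sketch: the OECD AI system lifecycle as explicit phases,
# with a risk review required at each transition. The review logic is a
# placeholder assumption, not a prescribed process.
from enum import Enum

class Phase(Enum):
    DESIGN_DATA_MODELS = 1
    VERIFICATION_VALIDATION = 2
    DEPLOYMENT = 3
    OPERATION_MONITORING = 4

def transition(current: Phase, risk_review_passed: bool) -> Phase:
    """Advance to the next phase only after a documented risk review."""
    if not risk_review_passed:
        raise RuntimeError(f"Risk review failed; remain in {current.name}")
    next_value = current.value + 1
    if next_value > Phase.OPERATION_MONITORING.value:
        # Terminal phase: the system stays in operation and monitoring.
        return Phase.OPERATION_MONITORING
    return Phase(next_value)

# Example: a system cannot move from verification to deployment without
# its risk assessment being revisited and signed off.
phase = Phase.VERIFICATION_VALIDATION
phase = transition(phase, risk_review_passed=True)
print(phase)  # Phase.DEPLOYMENT
```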

During system development, governments should exercise discretion, prioritising traceability for datasets, processes and decisions based on the potential for harm. Monitoring and feedback loops should be established to address emerging risks, unintended consequences or performance issues. Plans should also be made for the risks presented by obsolete and legacy AI systems.
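On the monitoring point above, the sketch below illustrates one possible shape of a feedback loop: each AI-influenced decision is logged with traceable metadata, and a simple threshold check flags performance degradation for re-evaluation. The field names, baseline and tolerance are illustrative assumptions only.

```python
# Illustrative sketch of a monitoring feedback loop: log traceable
# decision metadata and alert on performance drift. Field names and
# thresholds are hypothetical assumptions.
import datetime
import json
import logging

logging.basicConfig(level=logging.INFO)

def log_decision(model_version: str, input_id: str, output: str, confidence: float) -> None:
    """Record enough metadata to trace a decision back to its model and inputs."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model_version": model_version,
        "input_id": input_id,
        "output": output,
        "confidence": confidence,
    }
    logging.info(json.dumps(record))

def check_performance(recent_accuracy: float, baseline_accuracy: float, tolerance: float = 0.05) -> None:
    """Alert when measured accuracy drifts below the accepted baseline."""
    if recent_accuracy < baseline_accuracy - tolerance:
        logging.warning(
            "Accuracy %.2f fell below baseline %.2f; trigger a risk re-evaluation.",
            recent_accuracy, baseline_accuracy,
        )

log_decision("v1.3.0", "case-0042", "eligible", 0.91)
check_performance(recent_accuracy=0.82, baseline_accuracy=0.90)
```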

Governments should also consider oversight mechanisms for high-risk settings, including but not limited to external or internal review bodies, advisory bodies or AI risk committees, to provide consistent, expert advice and recommendations.

In focus: risk-based regulation

The Australian Government’s 2023 ‘Safe and Responsible AI in Australia’ consultation found strong public support for Australia to follow a risk-based approach to regulating AI.

As set out in the government’s interim response, the government is now considering options for mandatory guardrails for organisations designing, developing and deploying AI systems in high-risk settings.

This work focuses on testing, transparency and accountability measures and is being informed by a temporary AI expert group.

Standards

Where practical, governments should align their approaches with relevant AI standards. Standards outline specifications, procedures and guidelines that enable the safe, responsible and effective implementation of AI in a consistent and interoperable manner.

Some current AI governance and management standards include:

  • AS ISO/IEC 42001:2023 Information technology - Artificial intelligence - Management system
  • AS ISO/IEC 23894:2023 Information technology - Artificial intelligence - Guidance on risk management
  • AS ISO/IEC 38507:2022 Information technology - Governance of IT - Governance implications of the use of artificial intelligence by organizations

Governments should regularly check the Standards Australia website for new AI-related standards.

Procurement

Careful attention must be given to procurement documentation and contractual agreements when procuring AI systems or products. This may require consideration of:

  • AI ethics principles
  • clearly established accountabilities
  • transparency of data
  • access to relevant information assets
  • proof of performance testing throughout an AI system’s lifecycle.

It is essential to remain mindful of the rapid pace of AI advancements and ensure contracts are adaptable to changes in technology.

Governments should also consider internal skills development and knowledge transfer between vendors and staff to ensure sufficient understanding of a system’s operation and outputs, avoid vendor lock-in and ensure that vendors and staff fulfil their responsibilities.

Due diligence in procurement plays a critical role in managing new risks, such as transparency and explainability of ‘black box’ AI systems like foundation models. AI can also amplify existing risks, such as privacy and security. Governments must evaluate whether existing standard contractual clauses adequately cover these new and amplified risks.

Consideration should be given to a vendor’s capability to support the review, ongoing monitoring or evaluation of a system’s outputs in the event of an incident or a stakeholder raising concerns. This should include providing evidence and support for review mechanisms.

Governments may face trade-offs between a procured component’s benefits and inherent assurance challenges, and resolutions will vary according to use case and tolerance threshold.

Ultimately, procurement should prioritise alignment with ethics principles alongside delivering on a government’s desired outcomes.

In focus: responsible use of generative AI

Generative AI, which encompasses foundation models and large language models (LLMs), has garnered wide attention since the public release of ChatGPT in November 2022.

Whereas traditional AI has focused primarily on analysing data and subsequently making predictions, generative AI is able to create content across a wide range of mediums, including text, images, music and programming code, based on instructions or prompts provided by a user and informed by large datasets.

Recognising the potential and risks of generative AI, governments across Australia have released guidance for its use in the public service.

Common across this guidance is a focus on human oversight and human accountability for the use of content produced using generative AI, to ensure compliance with policies, legal obligations and ethical principles.

This includes instructions on the use and protection of classified or sensitive information, including personal information.

