Artificial Intelligence (AI) Transparency Statement

The Department of Finance (Finance) is committed to the safe and responsible use of artificial intelligence (AI). We consider AI to offer significant opportunities to improve productivity and service delivery while preserving accountability, transparency and public trust.

As a central agency, we provide high quality advice, frameworks and services to achieve value in the management of public resources for the benefit of all Australians. As the lead agency responsible for key initiatives under the APS AI Plan, we aim to lead by example in the responsible adoption and use of AI across government, supported by strong governance, clear accountability and transparency.

Our approach to AI

The AI Delivery and Enablement (AIDE) function within Finance is responsible for driving the delivery of the APS AI Plan. This includes accelerating AI adoption across the APS, addressing common barriers, monitoring AI implementation, coordinating the APS network of Chief AI Officers and developing metrics for successful AI adoption and use.

The GovAI project directly supports this approach by providing a secure, APS-only environment for exploring AI tools, experimenting safely with AI and data, and sharing early use cases and lessons learned across government. Finance is using this platform to develop and operate GovAI Chat, a secure general purpose AI tool designed to expand access to generative AI across the APS in a controlled and secure environment.

In our use of AI within Finance, we follow whole-of-government policy, including the Digital Transformation Agency’s Policy for the responsible use of AI in government (the Policy), support our staff in low-risk use cases and continue to build our internal AI capability.

We also apply Australia’s AI Ethics Principles and use the Organisation for Economic Co-operation and Development’s (OECD) definitions to help identify which of our systems use AI.

When we consider our use of AI, we apply the DTA’s Pilot AI assurance framework, the AI impact assessment tool and the Technical standard for government’s use of artificial intelligence to help us assess impacts, identify risks and put the right safeguards in place.

How we measure the effectiveness of our AI

We monitor AI adoption, training completion, productivity impacts and AI-related incidents to assess effectiveness. These insights inform updates to our governance settings, guidance and training, ensuring AI use remains proportionate, safe and beneficial.

Our AI solutions are approved centrally, with risks managed at the enterprise level. This includes standing approvals for low-risk uses, such as using Microsoft 365 Copilot to draft emails with information classified up to OFFICIAL.

Our staff use AI to support everyday work, including making documents accessible, supporting policy analysis, summarising information, analysing data, answering policy and process questions, drafting and reviewing content, writing and debugging code, and preparing meeting minutes and transcripts.

If staff want to use AI beyond these central approvals, they must complete an AI impact assessment and register their use as required by the DTA’s Policy.

We assess AI solutions against criteria drawn from Australia’s AI Ethics Principles. Each assessment is rated low, medium or high, which helps us decide whether the AI use case can proceed and what extra controls might be needed.

We won’t approve AI solutions or uses that still carry a medium or high level of risk after safeguards are applied. Where residual risk remains uncertain or insufficiently understood, deployment is deferred until risks can be appropriately mitigated. Any proposal assessed as high risk is also reported to the DTA.

How we use AI

We allow our staff to use AI to improve productivity and help deliver the advice, frameworks and services that Finance provides to government and other agencies.

Finance provides staff with access to AI tools in two ways. In our internal environment, staff can use Microsoft 365 Copilot, which operates within Finance’s Protected network and can be used with information up to PROTECTED, subject to our internal governance arrangements.

Staff may also use public generative AI tools to enhance productivity and support their core responsibilities. When using tools outside Finance’s environment, staff may enter only unclassified information up to OFFICIAL. Personal, sensitive or classified information must not be entered.

Compliance with these requirements is monitored through our information security and governance frameworks.

Our AI patterns and domains

Finance classifies staff use of AI using the DTA’s Classification system for AI use. Our AI use case patterns and domains are:

  • analytics for insights AI pattern, where we use AI to help analyse information, mainly in the Scientific and Policy and legal domains, using low‑risk information
  • workplace productivity AI pattern, where we use AI to help staff with routine tasks, mainly in the Service delivery and Corporate and enabling domains.

We do not use AI within the decision making and administrative action or image processing AI patterns, or in the Compliance and fraud detection, and Law enforcement, intelligence and security domains.

How we govern our AI use

We govern our internal use of AI in line with applicable laws and regulations, the DTA’s Policy and best practice.

Our AI Governance Committee

We have established an AI Governance Committee (AIGC) and appointed an Accountable Official (AO) and a Chief AI Officer (CAIO) to oversee how Finance adopts and uses AI across the department.

The AIGC is jointly chaired by the AO and CAIO, with membership drawn from senior executives across the department. It supports safe and responsible AI use by providing oversight of key decisions, risks and controls for AI use at Finance. The functions of the AIGC include:

  • promoting a culture of safe and responsible AI use across our workplace
  • monitoring, assessing and managing AI‑related risks and opportunities
  • overseeing the implementation of policies and guidance from the DTA.

Our internal policies and processes

Finance has policies and processes for the adoption and use of AI by our staff, including:

  • Acceptable use policy
  • Guidance on the use of generative artificial intelligence
  • Information security management policy framework
  • Information and data policy
  • Privacy policy
  • Risk management policy framework.

These policies and processes are regularly reviewed to ensure they remain fit for purpose.

We provide our staff with guidance and training on the safe and responsible use of AI. Staff are required to complete this training before being granted access to our secure internal AI tools.

Finance is committed to the full implementation of the Policy and to maintaining ongoing compliance. This commitment includes reviewing and updating this statement annually, or sooner where there is a significant change to our approach to artificial intelligence.

Who to contact regarding our statement

For any questions regarding this statement, or for more information about how Finance uses AI, please email feedback@finance.gov.au.

Authorisation

This statement is authorised by Finance’s Accountable Official, the First Assistant Secretary, Information, Communication and Technology Division, and the Chief AI Officer, First Assistant Secretary, Artificial Intelligence Delivery and Enablement Division.
