Trustworthy AI for Government: Collaboration with the Department of Finance’s GovAI Team

As artificial intelligence continues to reshape industries and public services, governments worldwide are seeking to harness its potential responsibly, ensuring that AI systems are trustworthy, transparent, and secure.

At CSIRO’s Data61, we are working closely with the GovAI team at the Department of Finance to pioneer six Trusted AI Use Cases for Government. This national collaboration is focused on identifying and scaling trustworthy AI applications that align with the Australian Government’s priorities for sovereign, secure, and ethical AI adoption.

The six use cases are illustrated below.

Within this initiative, our team is leading two use cases that demonstrate how next-generation AI can enhance government capability while upholding privacy, security, and policy integrity.

  1. AI Guardrail Assistant: Making AI Safe for Sensitive Government Use

Generative AI tools such as large language models (LLMs) offer powerful capabilities, but they also introduce new risks, such as data leakage and policy non-compliance. Unmanaged AI systems can expose sensitive government information or violate privacy and cybersecurity obligations. The AI Guardrail Assistant is designed to tackle these challenges head-on.

Value Proposition:

  • Prevents sensitive data leakage through content-aware detection and enforcement mechanisms.
  • Enforces privacy, cybersecurity, and policy obligations securely within a government tenant.
  • Strengthens resilience against prompt injection, misconfiguration, and evolving AI misuse.

Technical Foundations:

  • Graph-based policy extraction to map complex privacy and security rules into machine-readable logic.
  • Large language models for sensitive information protection and government policy interpretation.
  • Fuzzy matching and rule-based pattern recognition for nuanced detection of sensitive data and policy breaches.
  • Retrieval-Augmented Generation (RAG) to ensure guardrail decisions are contextually informed and customisable.

Together, these elements create a policy-aware AI defence layer, enabling agencies to deploy generative AI tools confidently, without compromising compliance or trust.
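To make the detection layer concrete, here is a minimal sketch of how rule-based pattern recognition and fuzzy matching might combine to flag sensitive content before it reaches an external model. The patterns, protective markings, and threshold below are illustrative stand-ins only; a real deployment would load these from the agency's own policy rules (for example, via the graph-based policy extraction described above).

```python
import re
from difflib import SequenceMatcher

# Illustrative patterns and markings only -- not the actual GovAI rule set.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "tax_file_number": re.compile(r"\b\d{3}\s?\d{3}\s?\d{3}\b"),
}
SENSITIVE_MARKINGS = ["cabinet-in-confidence", "official-sensitive"]

def fuzzy_hit(token: str, marking: str, threshold: float = 0.85) -> bool:
    """Catch near matches of a protective marking (typos, punctuation)."""
    return SequenceMatcher(None, token.lower(), marking.lower()).ratio() >= threshold

def check_prompt(text: str) -> list[str]:
    """Return guardrail findings for a piece of outbound text."""
    # Exact rule-based detection of structured sensitive data.
    findings = [name for name, rx in PATTERNS.items() if rx.search(text)]
    # Fuzzy detection of protective markings that exact rules would miss.
    for marking in SENSITIVE_MARKINGS:
        if any(fuzzy_hit(tok, marking) for tok in text.split()):
            findings.append(f"marking:{marking}")
    return findings
```

Any non-empty findings list would trigger the enforcement mechanism (blocking, redaction, or escalation), keeping the check inside the government tenant.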

  2. Document Sensemaking at Scale: Turning Information Overload into Insight

Government agencies manage vast and ever-growing collections of documents, from policy briefs to technical reports, contracts, and correspondence. Finding and synthesising relevant information is increasingly difficult, even with modern search tools.

Our second use case, Document Sensemaking at Scale, is a Retrieval-Augmented Generation (RAG) system tailored to government workflows. It helps analysts and decision-makers rapidly synthesise insights from curated document collections while maintaining transparency and accuracy.

Value Proposition:

  • Provides goal-aligned document summarisation and insight synthesis.
  • Integrates human-in-the-loop feedback to continuously improve relevance and reduce hallucinations or bias.

Technical Foundations:

  • Use case–specific LLM selection based on document type, sensitivity, and purpose.
  • Goal-aligned RAG instruction and prompt engineering to ensure factual and policy-consistent outputs.
  • Task decomposition and modular workflow orchestration for scalable analysis.
  • Customisable RAG pipelines adaptable across different agencies and domains.

This approach allows agencies to extract meaning at scale, enabling evidence-based policy work, risk assessment, and service design powered by transparent and traceable AI systems.
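The task decomposition and retrieval steps above can be sketched as a small pipeline. Everything here is a toy stand-in: the corpus, the keyword-overlap scoring, and the split-on-"and" decomposition heuristic are for illustration only; a production pipeline would use a vector index and an agency-approved LLM for the generation step.

```python
import re

# Toy corpus standing in for a curated government document collection.
CORPUS = {
    "brief-01": "The policy brief recommends phased adoption of AI guardrails.",
    "report-07": "The technical report evaluates retrieval accuracy on contracts.",
    "memo-12": "The memo summarises privacy obligations for cloud tenants.",
}

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"\w+", text.lower()))

def decompose(question: str) -> list[str]:
    """Toy task decomposition: split a compound question into sub-questions."""
    return [part.strip() + "?" for part in question.rstrip("?").split(" and ")]

def retrieve(query: str, k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by keyword overlap with the query."""
    terms = _tokens(query)
    ranked = sorted(CORPUS, key=lambda doc: -len(terms & _tokens(CORPUS[doc])))
    return ranked[:k]

def build_prompt(question: str) -> str:
    """Assemble a grounded prompt: each sub-question with its cited sources."""
    blocks = []
    for sub in decompose(question):
        blocks.append(f"Q: {sub}\nSources: {', '.join(retrieve(sub))}")
    return "\n\n".join(blocks)
```

Because each sub-question carries its retrieved source identifiers into the prompt, the generated answer stays traceable back to the documents it drew on, which is what keeps the workflow transparent and auditable.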

Why This Matters

Both the AI Guardrail Assistant and Document Sensemaking at Scale embody the vision of Trusted AI for Government. By combining CSIRO’s deep research in privacy-enhancing technologies, cyber resilience, and responsible AI development with the Department of Finance’s leadership in public-sector innovation, this collaboration sets the stage for safe, scalable AI adoption across the Australian Government.

Towards a Trusted AI Future

These use cases are part of a broader national effort to ensure that AI systems serving Australians are aligned with Australian values — secure, accountable, and beneficial. As AI becomes embedded in the fabric of public administration, CSIRO and the GovAI team are building the scientific and technological foundations to make trustworthy AI the default, not the exception.

Reference

Our use cases are published on the GovAI website (accessible with GovTeams credentials):

https://www.govai.gov.au/explore/use-cases