Collaboration with the National AI Centre (NAIC) on the development of the Guidance for AI Adoption

As artificial intelligence (AI) systems become deeply integrated into government, industry, and everyday life, trust is fast emerging as the cornerstone of responsible adoption. To help Australian organisations deploy AI safely and effectively, the National AI Centre (NAIC) at the Department of Industry, Science and Resources (DISR) released the Voluntary AI Safety Standard (VAISS) in September 2024, a foundational framework for managing AI risks responsibly.

Over the past year, our team at CSIRO’s Data61 Privacy Technology Group has worked closely with DISR NAIC on the first update to VAISS: the Guidance for AI Adoption, released in October 2025.

This Guidance builds on VAISS by condensing its 10 guardrails into 6 essential practices and expanding the audience beyond deployers to include developers. It provides organisations with concrete guidance on how to integrate AI safely, ethically, and transparently across their operations.

Our Contribution: Defining AI System Transparency and Explainability

Our group led the development of the AI System Transparency and Explainability section within the Guidance (Sections 4.2 and 4.4), an area critical for accountability and public trust.

Transparency and explainability are not just technical features; they are socio-technical commitments that determine how AI systems communicate their purpose, operation, and limitations to those affected by them. This is especially vital for general-purpose AI (GPAI), where models can behave in unexpected or hard-to-explain ways.

Why Transparency and Explainability Matter

Transparency and explainability enable:

  • Accountability: Decision-makers can trace outcomes back to model logic and training data.
  • User trust: Stakeholders are more likely to adopt AI they understand.
  • Regulatory compliance: Organisations can demonstrate alignment with privacy, fairness, and safety requirements.
  • Resilience: Transparent systems are easier to monitor, debug, and adapt when AI behaviours shift.

In practice, achieving explainability is an ongoing process. It requires collaboration across technical teams, policy experts, and end users to align communication with evolving legal, ethical, and social expectations.
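To make the idea of explainability concrete, the sketch below shows one simple technique: a one-at-a-time sensitivity analysis that attributes a model's output to individual input features by comparing against a baseline. It is a minimal illustration only; the toy scoring model, feature names, and weights are hypothetical and are not drawn from the Guidance, which does not prescribe any particular explanation method.

```python
# Illustrative sketch: a perturbation-based explanation for a toy
# scoring model. All feature names and weights are made up for this
# example; real systems would use richer attribution methods
# (e.g. Shapley-value approaches) and validated baselines.

def score(record: dict) -> float:
    """Toy linear scoring model with hand-set weights."""
    weights = {"income": 0.5, "tenure": 0.3, "defaults": -0.8}
    return sum(weights[k] * record[k] for k in weights)

def explain(record: dict, baseline: dict) -> dict:
    """Attribute the score to each feature by resetting one feature
    at a time to its baseline value and measuring the change.
    This is a one-at-a-time sensitivity analysis, not a full
    Shapley attribution (it ignores feature interactions)."""
    contributions = {}
    for feature in record:
        perturbed = dict(record)
        perturbed[feature] = baseline[feature]
        contributions[feature] = score(record) - score(perturbed)
    return contributions

applicant = {"income": 1.2, "tenure": 0.5, "defaults": 1.0}
baseline = {"income": 1.0, "tenure": 1.0, "defaults": 0.0}

for feature, delta in explain(applicant, baseline).items():
    # Positive deltas pushed the score up relative to the baseline,
    # negative deltas pushed it down.
    print(f"{feature}: {delta:+.2f}")
```

Surfacing per-feature contributions like this is one way a deployer can communicate *why* a system produced a given outcome, supporting the accountability and user-trust goals listed above.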

Looking Ahead

The Guidance for AI Adoption represents an important milestone in Australia’s journey toward responsible AI. It translates principles into operational practices, ensuring that AI systems deployed across sectors are safe, transparent, and accountable.

Our collaboration with DISR NAIC demonstrates how research, policy, and industry can work together to shape a national approach to trustworthy AI. We look forward to continuing this work, including deeper technical evaluations and scalable solutions that make transparency and explainability tangible for all organisations adopting AI in Australia.

References

Voluntary AI Safety Standard (VAISS): https://www.industry.gov.au/publications/voluntary-ai-safety-standard

Transition from VAISS to the Guidance: https://www.industry.gov.au/publications/guidance-for-ai-adoption/how-we-developed-guidance

Mapping of clauses between VAISS and the Guidance: https://www.industry.gov.au/publications/guidance-for-ai-adoption/crosswalk-vaiss-x-implementation-practices

Guidance for AI Adoption – Foundations: https://www.industry.gov.au/publications/guidance-for-ai-adoption/guidance-ai-adoption-foundations

Guidance for AI Adoption – Implementation Practices: https://www.industry.gov.au/publications/guidance-for-ai-adoption/guidance-ai-adoption-implementation-practices

Acknowledgements from DISR NAIC: https://www.industry.gov.au/publications/guidance-for-ai-adoption/acknowledgements