Shaping the future: Australia’s approach to AI regulation
Recent updates
There is finally some clarity around how artificial intelligence (AI) regulation will look in Australia. The Australian Government has released a proposals paper for introducing mandatory guardrails for AI in ‘high-risk’ settings (Proposals Paper), as well as a Voluntary AI Safety Standard (Voluntary Standard), following its consultations on safe and responsible AI.
There is currently no AI-specific regulation in Australia, and the Australian Government has indicated that, following its consultations, it considers the current regulatory framework not fit for purpose to respond to the distinct risks posed by AI. The Proposals Paper outlines the mandatory guardrails the Australian Government is proposing to implement for the use of AI in ‘high-risk settings’ in Australia as part of its broader approach to AI regulation.
The Voluntary Standard consists of voluntary guardrails designed to give businesses certainty ahead of the introduction of legislation and mandatory guardrails. Most of the voluntary guardrails align with the mandatory guardrails under consideration in the Proposals Paper (the exception is guardrail #10: the voluntary version concerns stakeholder engagement, whereas the proposed mandatory version requires conformity assessments). The voluntary guardrails apply to AI systems of any risk level, whereas the mandatory guardrails will apply only to high-risk AI systems.
The Voluntary Standard is intended by the Government to be a measure of best practice to assist Australian organisations with the practical development and deployment of AI throughout the AI supply chain, by guiding organisations to:
- raise the levels of safe and responsible capability across Australia
- protect people and communities from harms
- avoid reputational and financial risks
- increase trust and confidence in AI systems, services and products
- align with legal needs and expectations of the Australian population
- operate more seamlessly in an international economy.
Organisations that have implemented the voluntary guardrails will be well positioned to comply with any mandatory legislation the Government subsequently introduces. Furthermore, the Voluntary Standard draws on international standards and is designed to bring Australian practice into line with other jurisdictions, providing consistency for organisations operating across borders.
Voluntary Guardrails
The Voluntary Standard is made up of 10 voluntary guardrails, which are ongoing (as opposed to one-off) activities for organisations:
- Establish, implement and publish an accountability process including governance, internal capability and a strategy for regulatory compliance. Accountability for the safe and responsible deployment of AI cannot be outsourced. Organisations should establish proper foundations for their use of AI, including accountability processes. This will involve assigning an overall owner for the AI used by the organisation, implementing an AI strategy and other relevant policies, and providing strategic training as appropriate to both individuals and the organisation more broadly.
- Establish and implement a risk management process to identify and mitigate risks. Organisations should take practical steps to implement a risk management process at the organisational level, as well as risk and impact assessments for individual AI systems, in accordance with the organisation’s risk appetite. Rigorous risk and impact assessments should be undertaken on an ongoing basis for each AI system.
- Protect AI systems, and implement data governance measures to manage data quality and provenance. Organisations should implement fit-for-purpose approaches to data governance, privacy and cybersecurity. Requirements for such measures will differ depending on the use case and risk profile of the AI, but all must account for the unique characteristics of AI, including data quality, data provenance and cyber vulnerabilities.
- Test AI models and systems to evaluate model performance and monitor the system once deployed. AI systems must be tested at all stages of their life cycle, including prior to deployment and on an ongoing basis thereafter, to monitor for behaviour changes or unintended consequences. Organisations should define clear acceptance criteria against which each AI system can be assessed.
- Enable human control or intervention in an AI system to achieve meaningful human oversight across the life cycle. A competent person within the organisation should be accountable for each AI system and product. Human oversight ensures the organisation (or the appropriate service provider) is able to intervene if necessary, reducing the potential for unintended consequences.
- Inform end-users regarding AI-enabled decisions, interactions with AI and AI-generated content. Organisations should be transparent about their use of AI, disclosing when AI is being used and what content is AI-generated, in order to build trust with users and the community at large. The organisation should determine the most appropriate mechanism for disclosure based on the AI system, the stakeholders involved and the technology in use.
- Establish processes for people impacted by AI systems to challenge use or outcomes. Organisations must establish processes for stakeholders impacted by the AI systems to challenge how the organisation is using AI and contest any output or decisions generated by the AI.
- Be transparent with other organisations across the AI supply chain about data, models and systems to help them effectively address risks. Both developers and deployers of AI have transparency obligations and must consider safe and responsible AI practices in development and deployment. Organisations should know what components were used in an AI system and how it was built, and have sufficient information to understand and manage the system’s risks.
- Keep and maintain records to allow third parties to assess compliance with guardrails. Organisations must create and maintain an up-to-date, organisation-wide inventory of each AI system in use. Records should demonstrate compliance with the guardrails.
- Engage your stakeholders and evaluate their needs and circumstances, with a focus on safety, diversity, inclusion and fairness. Engagement should occur over the life of the AI system, at both the organisational and system levels.
Looking forward
The Government is seeking input from the public on the Proposals Paper, including the proposed mandatory guardrails, the definition of ‘high-risk’ AI and the regulatory options for mandating the guardrails. Consultation is open for four weeks, closing on 4 October 2024. In the meantime, we recommend that organisations using AI familiarise themselves with the voluntary guardrails and take practical steps to implement them, in preparation for the mandatory regime.
Want to know more?
DLA Piper is a global legal leader, advising many of the world’s most prominent companies and governments on the legal and compliance risks of creating, using and deploying AI. We have also developed tools and resources to help our clients assess their AI maturity and readiness to implement AI solutions, and to keep up to date with the latest AI insights. If you want to know more, check out:
- our dedicated EU AI Act app – where you can navigate, compare, bookmark, and download articles of the Act;
- AI Focus – where you can stay informed on AI developments and insights; and
- our AI and Employment Podcast series – discussing the key employment law issues facing employers arising out of AI.