In a significant move for its tech future, the Australian government has officially launched its National Artificial Intelligence (AI) Plan. Announced on Tuesday, December 2, the roadmap charts the country's course for widespread AI adoption, with a notable decision to manage the technology's emerging risks through the country's existing legal frameworks rather than enacting new, AI-specific laws.
Three Pillars of Australia's AI Ambition
As reported by Reuters, the government's strategy is built on three core objectives designed to integrate AI across the economy. The first pillar focuses on drawing investment into advanced data centres, which are crucial infrastructure for AI development. The second aims to build AI skills among the workforce, a move intended to support and protect jobs as the technology evolves. The third pillar prioritises public safety as AI adoption accelerates across various sectors.
Existing Laws to Govern AI Risks
This announcement marks a shift from earlier considerations of stricter, AI-specific regulations. The government has confirmed it will not introduce new, binding laws for AI at this stage. Instead, it will lean on Australia's established legal and regulatory frameworks to address potential harms.
"The government's regulatory approach to AI will continue to build on Australia's robust existing legal and regulatory frameworks," the National AI Plan stated. This means various government agencies and sector-specific regulators will be primarily responsible for identifying and managing AI-related risks within their domains.
Balancing Innovation with Safety
Federal Industry Minister Tim Ayres emphasised the plan's goal of allowing Australians to reap the benefits of AI while carefully managing its risks. "As the technology continues to evolve, we will continue to refine and strengthen this plan to seize new opportunities and act decisively to keep Australians safe," Ayres noted. The strategy seeks a balance between fostering innovation and implementing necessary safeguards.
This plan follows the government's recent commitment to establish an AI Safety Institute in 2026. That body is expected to help authorities monitor new risks and develop responses to threats from powerful generative AI tools, such as OpenAI's ChatGPT and Google's Gemini.
The rollout of the National AI Plan comes shortly after the government took a separate step to protect younger citizens online by banning access to social media for users under the age of 16.