What the Privacy Act 2024 Amendments Mean for Your AI Systems
The Privacy and Other Legislation Amendment Act 2024 represents the most significant update to Australian privacy law in over a decade. For businesses using AI, or planning to, the amendments introduce specific requirements around automated decision-making that will fundamentally change how AI systems need to be designed, documented, and governed.
With key provisions commencing in December 2026, Australian businesses have a limited window to prepare. Here is what you need to know.
What Changed
The 2024 amendments strengthen the Privacy Act 1988 across several fronts, but three changes are particularly relevant to businesses deploying AI systems.
First, the amendments introduce transparency obligations for automated decision-making. Organisations that use computer programs to make decisions that could reasonably be expected to significantly affect individuals' rights or interests must disclose that use in their privacy policy, including the kinds of decisions involved and the kinds of personal information used to make them. The broader reform agenda also contemplates meaningful explanations of decision logic and a right to request human review, and regulators already treat both as good practice.
Second, the penalty regime has been transformed. The 2022 amendments raised the maximum penalty for serious or repeated interferences with privacy to the greater of $50 million, three times the value of any benefit obtained from the breach, or 30 per cent of the entity's adjusted turnover in the relevant period, and the 2024 amendments add mid-tier and low-tier civil penalties so that less serious breaches can also be punished. These are among the highest privacy penalties globally, comparable to the EU's GDPR regime.
Third, the Office of the Australian Information Commissioner (OAIC) receives expanded enforcement powers, including the ability to issue infringement notices for certain breaches and to conduct inquiries and assessments of practices such as automated decision-making without waiting for a complaint.
What Counts as an "Automated Decision"
The amendments define automated decisions broadly. Any decision made by an automated system without meaningful human involvement, where the outcome could reasonably be expected to significantly affect an individual's rights or interests, falls within scope. This includes decisions about:
- Credit applications and loan approvals
- Insurance pricing and claims assessments
- Employment screening and recruitment shortlisting
- Customer eligibility for products or services
- Content moderation and account actions
- Government benefit determinations
Importantly, a decision does not need to be fully automated to be caught. If an AI system generates a recommendation that a human routinely approves without genuine independent assessment — so-called "rubber-stamping" — that is likely to be treated as an automated decision under the new rules.
Conversely, AI systems used purely for internal analytics, content generation, or operational optimisation that do not directly affect individuals' rights are unlikely to trigger the transparency requirements. The key test is whether the output significantly affects an identifiable individual.
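To make that test concrete, here is a minimal triage sketch in Python. The field names and the `likely_in_scope` helper are our own illustration, not statutory terms, and a real scoping assessment should go through your legal advisers:

```python
# Illustrative triage only: encodes the scoping questions above as code.
from dataclasses import dataclass

@dataclass
class SystemProfile:
    makes_or_shapes_decisions: bool       # does the output make or materially shape a decision?
    genuine_human_assessment: bool        # real independent review, not rubber-stamping?
    significantly_affects_individual: bool  # a person's rights or interests

def likely_in_scope(profile: SystemProfile) -> bool:
    """Rough first-pass triage of the automated-decision test."""
    if not profile.makes_or_shapes_decisions:
        return False
    if not profile.significantly_affects_individual:
        return False
    # A human in the loop only takes a system out of scope if the
    # review is genuinely independent, not routine approval.
    return not profile.genuine_human_assessment

# Example: a recruitment shortlisting tool whose recommendations are
# routinely accepted without independent assessment.
print(likely_in_scope(SystemProfile(True, False, True)))  # True
```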
The Penalty Landscape
The penalty increases deserve particular attention. At up to $50 million or 30 per cent of turnover, the consequences of non-compliance are existential for mid-market businesses and material even for large enterprises. The OAIC has signalled that it intends to use these enhanced penalties, particularly in cases involving automated systems where organisations have failed to implement adequate governance.
The "three times the benefit obtained" measure is also notable. If an AI system generates $20 million in value for your business through automated decisions that breach the Act, you could face a $60 million penalty. This changes the cost-benefit calculation for cutting corners on AI governance.
Steps Businesses Should Take Now
With the automated decision-making provisions commencing in December 2026, businesses should be acting now. Here are the practical steps we recommend.
Audit your AI systems. Identify every automated system that makes or contributes to decisions affecting individuals. This includes AI chatbots that handle customer complaints, recommendation engines that determine pricing, screening tools used in HR, and any system where an AI output directly or indirectly affects a person's access to services, products, or opportunities.
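One lightweight way to start the audit is a structured register. The sketch below shows one possible shape for a register entry; the schema and the example system are our own illustration, not a prescribed format:

```python
# A possible shape for one entry in an AI system register (illustrative).
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    owner: str                  # accountable business owner
    purpose: str
    affects_individuals: bool   # does output touch a person's access to anything?
    decision_types: list[str] = field(default_factory=list)

register = [
    AISystemRecord(
        name="complaints-triage-bot",
        owner="Customer Operations",
        purpose="Routes customer complaints and drafts responses",
        affects_individuals=True,
        decision_types=["complaint outcomes", "goodwill credits"],
    ),
]
```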
Classify decisions by risk. Not all automated decisions carry the same risk. Prioritise systems that make high-impact decisions — those affecting financial outcomes, employment, insurance, or access to essential services. These will face the most scrutiny and require the most robust governance.
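Risk tiering can start as a simple rule and grow from there. A hedged sketch, assuming a hypothetical set of high-impact domains; real classification should also weigh context, volume, and the reversibility of harm:

```python
# Illustrative tiering rule; the domain list and tiers are our assumptions.
HIGH_IMPACT_DOMAINS = {"credit", "employment", "insurance", "essential services"}

def risk_tier(domain: str, affects_individuals: bool) -> str:
    """Assign a coarse risk tier to an automated decision system."""
    if not affects_individuals:
        return "low"
    return "high" if domain in HIGH_IMPACT_DOMAINS else "medium"

print(risk_tier("employment", affects_individuals=True))  # high
```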
Document your logic. The transparency obligations, and the reforms expected to follow them, demand that you can explain the logic of your automated decisions in a meaningful way. This means documenting what data the system uses, how it processes that data, what factors influence the output, and what safeguards are in place. "It is a machine learning model" is not a sufficient explanation.
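A practical forcing function is a minimal decision-logic card that cannot be created without the key fields filled in. The structure below is our suggestion, and the loan example is hypothetical:

```python
# Illustrative "decision logic card"; structure and field names are ours.
from dataclasses import dataclass

@dataclass
class DecisionLogicCard:
    system: str
    inputs: list[str]       # what personal information the system uses
    processing: str         # plain-language account of how inputs become outputs
    key_factors: list[str]  # the factors that most influence the outcome
    safeguards: list[str]   # bias testing, thresholds, monitoring, and so on

card = DecisionLogicCard(
    system="loan-pre-screen",
    inputs=["income", "repayment history", "existing liabilities"],
    processing=(
        "A gradient-boosted model scores default risk; applications scoring "
        "below a set threshold are declined pending manual review."
    ),
    key_factors=["repayment history", "debt-to-income ratio"],
    safeguards=["quarterly fairness audit", "threshold review", "drift monitoring"],
)
```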
Implement human review pathways. The reform agenda points squarely at a right for affected individuals to request human review of automated decisions, and customers will expect one regardless. You need a clear, accessible process for this. That means training staff to conduct genuine reviews rather than just confirming what the AI said, and having the technical capability to explain individual decisions.
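A review pathway is easier to defend if every request leaves a trace. The sketch below assumes a hypothetical review queue; the key design choice is requiring a written rationale, which nudges reviewers toward genuine assessment rather than rubber-stamping:

```python
# Illustrative, traceable human-review workflow record.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class ReviewRequest:
    decision_id: str
    requested_at: datetime
    automated_outcome: str
    reviewer: str | None = None
    reviewer_outcome: str | None = None
    rationale: str | None = None  # the reviewer's independent reasoning, in writing

def complete_review(request: ReviewRequest, reviewer: str,
                    outcome: str, rationale: str) -> ReviewRequest:
    """Close a review; refuses to close without recorded independent reasoning."""
    if not rationale.strip():
        raise ValueError("A human review must record independent reasoning.")
    request.reviewer = reviewer
    request.reviewer_outcome = outcome
    request.rationale = rationale
    return request
```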
Review your data practices. The amendments also tighten requirements around data collection, use, and disclosure. Ensure that the data feeding your AI systems was collected with appropriate consent and is being used consistently with the purpose for which it was collected. Pay particular attention to AI systems trained on or using personal information.
Update your privacy policy. Your privacy policy will need to disclose the use of automated decision-making, the kinds of decisions involved, and the kinds of personal information used to make them. Explaining how individuals can seek human review is sensible to include as well. Start drafting these updates now.
How AI Governance Frameworks Help
A well-designed AI governance framework addresses most of these requirements systematically rather than on a system-by-system basis. Good governance includes:
- A register of all AI systems with their risk classifications
- Documented model cards and decision logic for each system
- Defined roles and responsibilities for AI oversight
- Regular auditing and monitoring processes
- Incident response procedures for AI failures
- Clear escalation paths for human review
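Tying the earlier sketches together, a recurring governance check might walk the register and flag in-scope systems with gaps. Again illustrative only, using plain dictionaries for brevity:

```python
# Illustrative recurring check over a system register (shapes are our own).
def governance_gaps(register, documented, review_enabled):
    """Flag systems affecting individuals that lack documentation or a review path."""
    gaps = []
    for system in register:
        if not system["affects_individuals"]:
            continue
        if system["name"] not in documented:
            gaps.append((system["name"], "missing decision-logic documentation"))
        if system["name"] not in review_enabled:
            gaps.append((system["name"], "no human-review pathway"))
    return gaps

register = [{"name": "loan-pre-screen", "affects_individuals": True}]
print(governance_gaps(register, documented=set(), review_enabled=set()))
```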
The investment in governance pays for itself through reduced compliance risk, faster deployment of new AI systems (because the guardrails are already in place), and greater confidence from customers, regulators, and boards.
Businesses that already operate under frameworks aligned with the NIST AI Risk Management Framework, the ISO/IEC 42001 AI Management System standard, or the Australian Government's Voluntary AI Safety Standard will find they have a significant head start.
How OzAI Can Help
At OzAI, we help Australian businesses build AI governance frameworks that are practical, proportionate, and aligned with the evolving regulatory landscape. We do not believe in governance for governance's sake — we build frameworks that protect your business while enabling you to deploy AI confidently and quickly.
Our consulting team can audit your existing AI systems, assess your compliance readiness against the 2024 amendments, and build a governance framework tailored to your organisation's size, industry, and risk profile. We also help with the technical implementation — ensuring your AI systems are designed for explainability and human review from the ground up.
If you want to understand your exposure and start preparing, book a discovery call with our team.