How AI will be impacted by the biggest overhaul of Australia’s privacy laws in decades

By Jonathan Cohen
Principal
3 October 2023


Co-author: Tom Moulder



After receiving more than 500 submissions, the Attorney-General has released the Government's much-anticipated response to the consultation process for amending the Privacy Act 1988 (Cth). With the Government committed to introducing legislative changes in 2024, we explore the key proposed changes that may impact AI and outline how organisations that use AI can prepare for them.

In a significant move to address concerns around consumer privacy protections, Attorney-General Mark Dreyfus has unveiled the Government’s response to the Privacy Act Review Report (the Review Report) which looks to bring Australian privacy laws in line with the rest of the world. As discussed in our previous article, we anticipate these proposed legislative amendments are likely to have fundamental impacts on the ways in which organisations are able to collect and use data within AI, machine learning and related processes.

We outline the key proposed changes that may impact AI and related processes, which the Government has either agreed to or agreed to in principle, covering:

  • Expanding the scope of data that is considered to be personal information
  • Expanding consumer rights particularly around consent, use of data and rights of erasure and explanation
  • Requiring that the use of personal information is fair and reasonable.

Proposed changes are likely to impact how organisations collect and use data within AI

Expanding what’s personal

Several proposed changes to the Privacy Act seek to materially increase the spectrum of data that is considered to be personal information, capturing a far broader range of data that is often used in AI and related processes. The Government has agreed in principle to all of these changes, notably:

  • Proposal 4.1 significantly expands the range of information considered to be personal information by changing the requirement for data to be “about” an individual to simply being that it “relates to” an individual. This potentially captures most of the data that can be attached to an individual customer, though the Government has indicated that the Office of the Australian Information Commissioner (OAIC) will issue guidance to confine the connection to situations where it is not “too tenuous or remote”.
  • Proposal 4.3 expands the definition of “collection” to include inferred or generated data. The “inferred” aspect of this action is likely to capture a wide swathe of model outputs, subjecting them to materially increased governance requirements.
  • Proposal 4.9(c) clarifies that sensitive information can be inferred from non-sensitive information, subjecting it to enhanced protections and further complicating AI and data governance. For example, social media interaction data is not necessarily itself sensitive, but the outputs of models that use this data to infer features such as political opinions will likely be captured.

Taken together, these changes suggest organisations will likely need to significantly expand the reach and structure of their data governance practices, and carefully consider the nature of inferences being made by models, and how these inferences are governed and protected.
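To make this concrete, the sketch below shows what a minimal governance check against the expanded definitions might look like. All names, categories and tiers are hypothetical illustrations, not OAIC guidance:

```python
from dataclasses import dataclass

# Hypothetical set of inferred attributes treated as sensitive; a real
# classification would follow OAIC guidance once issued.
SENSITIVE_INFERENCES = {"political_opinion", "health_status", "religious_belief"}

@dataclass
class DataItem:
    name: str
    relates_to_individual: bool  # Proposal 4.1: "relates to", not just "about"
    inferred: bool               # Proposal 4.3: inferred data counts as "collected"

def governance_tier(item: DataItem) -> str:
    """Return the handling tier a data item might fall into under the
    proposed definitions (illustrative only)."""
    if not item.relates_to_individual:
        return "non-personal"
    if item.inferred and item.name in SENSITIVE_INFERENCES:
        # Proposal 4.9(c): sensitive information inferred from non-sensitive
        # inputs still attracts the enhanced protections.
        return "sensitive-personal"
    return "personal"

# A model output inferring political opinion from social media interactions:
print(governance_tier(DataItem("political_opinion", True, True)))  # sensitive-personal
```

The point of a tiering function like this is that model *outputs*, not just raw inputs, would need to flow through the same governance catalogue.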

Enhancing consumer rights

A second category of proposed amendments to the legislation aims to provide greatly enhanced rights to consumers, in line with international trends such as the EU's General Data Protection Regulation (GDPR).

Some of the proposed changes would provide significantly enhanced rights for consumers to determine how their data is used, including through more nuanced rights to provide and withdraw consent, and the right to erasure of personal data. Specifically, the Government has agreed in principle to the following proposals:

  • Proposal 11.1 amends the definition of consent to provide that it must be voluntary, informed, current, specific and unambiguous. Of these aspects, the "specific" component is likely to be of most interest for organisations that make wide use of customer data across multiple functions and processes, potentially requiring consent for each specific use case.
  • Proposal 11.3 expressly recognises the ability for customers to withdraw consent, and to do so as easily as consent was provided.
  • Proposal 18.3 provides a right to erasure of personal information on request from an individual, subject to some exceptions around public interest and various technical exceptions.

Some customers’ consent profiles will change over time and across use cases. The above proposals potentially establish requirements for careful tracking of exactly where individual customers’ data is used, and for mechanisms to remove that data from processes where consent has been withdrawn. Moreover, there is a question as to how far the requirement to delete information extends. For example, it is unclear whether the legislation would require AI models that have already been trained on a customer’s data to “unlearn” that data within the model. Depending on the model and data structure, such a requirement may introduce significant compliance challenges. As discussed in a previous article, a growing body of research seeks to address these challenges, though technical limitations remain.
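The tracking requirement described above can be sketched as an append-only consent ledger where only the most recent event counts, so that consent is always "current" and withdrawal is as simple as a grant. This is a toy illustration with hypothetical identifiers, not a compliance design:

```python
from datetime import datetime, timezone

class ConsentLedger:
    """Toy per-customer, per-use-case consent register (illustrative only)."""

    def __init__(self):
        self._events = []  # (customer_id, use_case, granted, timestamp)

    def grant(self, customer_id: str, use_case: str) -> None:
        self._events.append((customer_id, use_case, True, datetime.now(timezone.utc)))

    def withdraw(self, customer_id: str, use_case: str) -> None:
        # Proposal 11.3: withdrawal should be as easy as the original grant.
        self._events.append((customer_id, use_case, False, datetime.now(timezone.utc)))

    def has_consent(self, customer_id: str, use_case: str) -> bool:
        # Consent must be "current": only the latest event for this
        # customer/use-case pair counts.
        for cid, uc, granted, _ in reversed(self._events):
            if cid == customer_id and uc == use_case:
                return granted
        return False  # Proposal 11.1: no consent unless unambiguously given

ledger = ConsentLedger()
ledger.grant("cust-42", "credit_model_training")
ledger.withdraw("cust-42", "credit_model_training")
print(ledger.has_consent("cust-42", "credit_model_training"))  # False
```

Keeping consent per use case, rather than as a single customer-level flag, is what makes the "specific" component of Proposal 11.1 tractable; downstream processes (including model training pipelines) would query the ledger before using a record.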

Changes provide enhanced rights for consumers to determine how their personal data is used

A second class of proposals provides enhanced rights for consumers to receive explanations for how their data is used, including its use in automated decision-making processes, such as those built around AI.

  • Proposal 18.1 provides individuals with a right to access, and to receive an explanation about, their personal information on request, with Proposal 18.1(c) requiring that the organisation provide an explanation or summary of what has been done with that information.

Proposals 19.1, 19.2 and 19.3 set out more explicit rights to explanation for substantially automated decision procedures that have a “legal or similarly significant impact” on an individual’s rights, including a right to request “meaningful information” about how these decisions are made. Importantly, the Government has agreed to these proposals (compared to the less committal agreement ‘in principle’ for other proposals). The details underpinning these rights have been left to the OAIC to develop through guidelines, and the Government notes an intention to align with the AI regulation under development by the Department of Industry, Science and Resources. We note that there are two key uncertainties in these requirements:

  1. What counts as a “legally or similarly significant effect”: the original consultation paper noted a potentially wide range of cases, including decisions in relation to insurance underwriting, access to credit and health care service allocation; it will be up to the OAIC to clarify.
  2. What is construed as a “meaningful explanation” of how the decision was made. For example, whether a high-level statement along the lines of “we consider a range of factors including x, y and z” will suffice, or whether much more explicit and detailed information is required, such as which individual factors contributed to the decision and the extent to which they contributed.
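For the second uncertainty, the more detailed end of the spectrum is easiest to picture for a linear scoring model, where the decision score decomposes exactly into per-factor contributions. The weights, factor names and values below are entirely hypothetical:

```python
# Hypothetical linear credit-scoring model: each factor's contribution is
# simply weight * value, and the contributions (plus the intercept) sum
# exactly to the decision score. This is one candidate form of a
# "meaningful explanation" for a substantially automated decision.
WEIGHTS = {"income": 0.4, "existing_debt": -0.7, "years_at_address": 0.1}
INTERCEPT = 0.2

def explain_decision(applicant: dict) -> dict:
    """Break a linear decision score into per-factor contributions."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    score = INTERCEPT + sum(contributions.values())
    return {
        "score": round(score, 3),
        "contributions": {f: round(c, 3) for f, c in contributions.items()},
    }

explanation = explain_decision(
    {"income": 1.5, "existing_debt": 0.5, "years_at_address": 2.0}
)
print(explanation["score"])          # 0.65
print(explanation["contributions"])  # e.g. existing_debt contributes -0.35
```

For more complex models the decomposition is not exact, and attribution techniques carry their own caveats; whether the OAIC guidelines will require factor-level detail at all remains the open question noted above.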

Fair and reasonable use

Several of the proposed changes, which the Government has agreed to in principle, provide more explicit requirements on organisations to consider the use of personal information in each use case. Specifically:

  • Proposal 12.1 requires that the collection, use and disclosure of personal information is “fair and reasonable in the circumstances”.
  • Proposal 12.2 clarifies the considerations that need to be made in determining whether a use is fair and reasonable in the circumstances. Of these, we point out part (c) that states “whether the collection, use or disclosure is reasonably necessary or directly related for the functions and activities of the agency”.

Under these proposed changes, organisations would be prudent to take a careful approach to selecting features for inclusion in models, one that includes a “fair and reasonable” assessment on top of more statistical bases for selecting model features.
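A feature-selection gate of this kind might look like the sketch below, where a feature enters the model only if it clears both a statistical threshold and a documented “fair and reasonable” review. The feature names, correlation figures and threshold are all illustrative:

```python
# Hypothetical candidate features for a model, each carrying a statistical
# measure (absolute correlation with the target) and the outcome of a
# documented "fair and reasonable" review (Proposal 12.1).
candidates = [
    {"name": "payment_history", "abs_corr": 0.42, "fair_and_reasonable": True},
    {"name": "postcode",        "abs_corr": 0.31, "fair_and_reasonable": False},
    {"name": "browser_type",    "abs_corr": 0.05, "fair_and_reasonable": True},
]

def select_features(candidates: list[dict], min_abs_corr: float = 0.1) -> list[str]:
    """Keep only features that pass BOTH the statistical cut and the
    fair-and-reasonable review."""
    return [
        c["name"]
        for c in candidates
        if c["abs_corr"] >= min_abs_corr and c["fair_and_reasonable"]
    ]

print(select_features(candidates))  # ['payment_history']
```

Note that "postcode" is excluded despite being statistically predictive; that is exactly the kind of trade-off the fair-and-reasonable test is intended to force into the open.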


What organisations should be considering

The Response to the Report highlights the changes that are likely to come with reforms to the Privacy Act, though many of the details still need to be refined through focused consultation in the lead-up to the introduction of the updated legislation in 2024. Nevertheless, the Government has provided some clear signals around how the legislation is likely to shape up.

Organisations would do well to prepare for these changes by reviewing how they capture, process and use customer data throughout existing and planned AI, machine learning and automated decision-making processes. In particular, they should:

  1. Audit their AI and machine learning processes under the new definitions of personal information.
  2. Assess if data flow within these processes meets the “fair and reasonable” criteria and minimise unnecessary data uses.
  3. Evaluate processes against the “legally significant” benchmark and ensure decision transparency.
  4. Update data consent tracking systems to comply with the new requirements where required.
  5. Review and, where needed, amend guidelines for future development of AI, machine learning and substantially automated decision procedures to continue providing “privacy by design” guarantees.

For more information on the services we offer to help organisations ensure their AI, machine learning and automated decision-making processes are fit for purpose, compliant and able to build customer trust, visit our Ethical AI and Governance page.

