What’s new in the Privacy Act review and what it may mean in practice

By Jonathan Cohen, Principal | Co-author Stephanie Russell
5 May 2022

In this follow-up to our two-part series on privacy in the age of big data, we explore the latest developments in the Australian Government’s ongoing review of the Privacy Act 1988, and how the most recent updates to proposed changes may impact people and organisations.

In its aim to ‘ensure privacy settings empower consumers, protect their data and best serve the Australian economy’, the Government recently released a discussion paper as part of its Privacy Act review. The paper puts forward further reform options and refinements, including to the three changes we outlined in Part 1 of our series, where we examined how they may impact machine learning models, the industries that use them and the consumers they target. These proposed changes are:

  • Expanding the definition of personal information
  • Strengthening consent requirements
  • Introducing and applying a person’s ‘right to erasure’ of their personal information.

Importantly, the Privacy Act review is occurring alongside the Online Privacy Bill, which will introduce a binding code for social media and other online platforms, increase penalties and enhance enforcement measures. Additionally, changes identified as more urgent will be prioritised and implemented through this new Bill.


We now take a look at the updates to the three main proposed changes we covered earlier, together with some new options for reform, and how all of these may affect businesses if implemented.

Getting personal – the definition of ‘personal information’

What we highlighted previously

A proposed change would expand the definition of personal information to include technical data and ‘inferred’ data, representing a fundamental expansion of what organisations might think of as personal information. The consequences for machine learning model regulation are potentially highly significant for consumers and organisations if models and their outputs are ensnared in restrictive governance requirements.

What’s happening now

Following positive support from public submissions, the latest set of proposals in the discussion paper still includes amending the definition of personal information to make it clearer that it includes technical and inferred information.

Reducing uncertainty

The paper also proposes to refine the wording of the definition further by replacing ‘about’ with ‘relates to’ in the context of an identified individual’s information. This change is designed to reduce uncertainty and capture a broader range of technical information from which a person could be identified, even if the information is primarily about something else – such as the person’s telecommunications use.

Increased compliance burden

This broader scope will increase the compliance burden for organisations as a higher volume of information may now be subject to more rights and obligations than previously. At this stage, the potential consequences for machine learning model regulation remain to be seen and will hinge on whether the definition of inferred data extends in practice to model outputs, and even models themselves.
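
As a rough illustration only, the sketch below shows how an organisation might flag technical and inferred data as personal information when auditing a data inventory under the broader definition. The field names, categories and the Python helper itself are hypothetical, not prescribed by the review.

```python
from dataclasses import dataclass

@dataclass
class Field:
    name: str
    category: str                # 'identifier', 'technical' or 'inferred'
    relates_to_individual: bool  # linked, or reasonably linkable, to a person

def is_personal_information(field: Field) -> bool:
    """Conservative reading of the proposed definition: technical and inferred
    data count as personal information whenever they relate to an individual."""
    return field.relates_to_individual and field.category in {
        "identifier", "technical", "inferred",
    }

# Hypothetical data inventory, for illustration only
inventory = [
    Field("email", "identifier", True),
    Field("ip_address", "technical", True),
    Field("churn_score", "inferred", True),  # a model output about a person
    Field("store_opening_hours", "other", False),
]

print([f.name for f in inventory if is_personal_information(f)])
# ['email', 'ip_address', 'churn_score']
```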

Saying ‘I do’ – strengthening consent requirements

What we highlighted previously

The strengthening of consent requirements through pro-consumer defaults may lead to consent fatigue for consumers and potentially limit the data available to organisations for future training of machine learning models.

What’s happening now

In response to concerns raised in public feedback about pro-consumer defaults, the Government has proposed an alternative second option, which requires easily accessible privacy settings that give individuals an obvious and clear way to set all privacy controls to their most restrictive level, such as through a single-click mechanism.
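
To make the single-click idea concrete, here is a minimal Python sketch, assuming a set of invented privacy controls; the discussion paper does not prescribe any particular implementation.

```python
# Hypothetical privacy controls; True means the data use is currently permitted
DEFAULT_SETTINGS = {
    "personalised_ads": True,
    "share_with_partners": True,
    "location_tracking": True,
    "analytics_cookies": True,
}

def apply_most_restrictive(settings: dict[str, bool]) -> dict[str, bool]:
    """The 'single click': set every control to its most privacy-protective value."""
    return {name: False for name in settings}

print(apply_most_restrictive(DEFAULT_SETTINGS))
# {'personalised_ads': False, 'share_with_partners': False, ...}
```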

Pro-privacy default settings – a less restrictive option

It’s plausible that a less stringent alternative will be adopted, with the possibility of requiring pro-privacy default settings only under certain circumstances, such as if the personal information is sensitive or about a child. In this case, we expect the potential consequences for machine learning models, such as limited data and misleading model insights, to be of less material concern.


Consent for children and vulnerable people

Although the mandatory pro-privacy defaults may not go ahead, strengthening of consent requirements for children and vulnerable individuals is proposed to be implemented through the binding code introduced by the Online Privacy Bill. Under this legislation, parental consent will be explicitly required for data related to children under the age of 16, and the collection or use must be reasonable and ‘have the best interests of the child as the primary consideration’. For organisations working for or in the public sector, this may result in increased compliance cost for analytics and machine learning models involving children or vulnerable individuals.
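
As a simplified sketch of how a collection workflow might gate on these requirements, the Python example below uses the proposed under-16 threshold; the function and its inputs (such as how parental consent is verified) are purely illustrative.

```python
from datetime import date

PARENTAL_CONSENT_AGE = 16  # age threshold proposed under the Online Privacy Bill

def age_in_years(date_of_birth: date, today: date) -> int:
    """Whole years between date_of_birth and today."""
    return today.year - date_of_birth.year - (
        (today.month, today.day) < (date_of_birth.month, date_of_birth.day)
    )

def may_collect_child_data(date_of_birth: date, today: date,
                           has_parental_consent: bool,
                           in_childs_best_interests: bool) -> bool:
    """Collect a child's data only with parental consent and where the collection
    has the child's best interests as the primary consideration."""
    if age_in_years(date_of_birth, today) >= PARENTAL_CONSENT_AGE:
        return True  # not a child for the purposes of this check
    return has_parental_consent and in_childs_best_interests

# A child without parental consent: collection should not proceed
print(may_collect_child_data(date(2010, 6, 1), date(2022, 5, 5),
                             has_parental_consent=False,
                             in_childs_best_interests=True))  # False
```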

Not partnered for life – right to erasure of data

What we highlighted previously

The introduction and extension of the application of a ‘right to erasure’ to machine learning models would be a very tall order for organisations, with the cost and time required to comply with erasure requests rapidly outweighing the benefits of using a machine learning model at all.

What’s happening now

To address the challenges raised in public submissions, the Government has amended its proposal by introducing the right to erasure only under a limited set of circumstances.

Under the latest proposal set out in the discussion paper, individuals may request erasure of personal information only where one of the following grounds applies:

  • The personal information must be destroyed or de-identified as it is no longer required for business purposes
  • The personal information is sensitive
  • The individual has successfully objected to personal information handling through the ‘right to object’ (a new proposal, which we discuss later in our section on marketing)
  • The personal information has been collected, used or disclosed unlawfully
  • The entity is required by Australian law or a court order to destroy the information
  • The personal information relates to a child.

Should a person’s right to erasure be further limited?

In practice, the situations under which a person could utilise their right to erasure may be limited even further, as the Government proposes to provide for exceptions to the above, such as in instances where personal information is required for a contract or where erasure would be technically impractical and constitute an unreasonable burden.
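
To make the interplay between the proposed grounds and exceptions concrete, here is a rough Python triage sketch. The ground and exception names loosely mirror the lists above, but the logic is illustrative only and not drawn from the discussion paper.

```python
# Grounds proposed in the discussion paper under which erasure may be requested
GROUNDS = {
    "no_longer_required", "sensitive_information", "successful_objection",
    "unlawfully_handled", "required_by_law_or_court", "relates_to_child",
}

# Exceptions being considered (an illustrative subset only)
EXCEPTIONS = {"needed_for_contract", "technically_impractical"}

def erasure_applies(claimed_grounds: set[str], engaged_exceptions: set[str]) -> bool:
    """An erasure request succeeds only if at least one proposed ground applies
    and no exception is engaged."""
    return bool(claimed_grounds & GROUNDS) and not (engaged_exceptions & EXCEPTIONS)

print(erasure_applies({"sensitive_information"}, set()))                    # True
print(erasure_applies({"sensitive_information"}, {"needed_for_contract"}))  # False
print(erasure_applies({"unrelated_reason"}, set()))                         # False
```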

The Government is now considering industry feedback submissions on what exceptions should apply to address the concerns raised about freedom of speech, challenges during law enforcement and practical difficulties for industry.


Nevertheless, a question remains: how will the right to erasure apply to machine learning models and what will ‘unreasonable’ mean in practice? An important part of this question is whether the right to erasure will extend to the ‘unlearning’ techniques discussed in Part 2 of our series.
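
For context, the most straightforward (and usually most expensive) form of unlearning is simply to retrain the model without the erased individuals’ records. The scikit-learn sketch below, on synthetic data, shows that baseline; the techniques discussed in Part 2 aim to achieve a similar effect without a full retrain.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                          # synthetic features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # synthetic labels
person_ids = np.arange(1000)                            # one record per individual

model = LogisticRegression().fit(X, y)

def unlearn_by_retraining(erased_ids: set[int]) -> LogisticRegression:
    """Naive unlearning: drop the erased individuals' records and refit from scratch.
    This guarantees their data no longer influences the model, at full retraining cost."""
    keep = ~np.isin(person_ids, list(erased_ids))
    return LogisticRegression().fit(X[keep], y[keep])

model_after_erasure = unlearn_by_retraining({12, 345, 678})
```

Whether anything short of a full retrain would satisfy an erasure request is exactly the open question above.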

Some new options – a crackdown on marketing

On the other hand, it appears the Government is largely supportive of giving individuals more control of how their data is collected and used for direct marketing and targeted advertising.

The right to object

One of the latest proposed changes, put forward in the discussion paper, is the introduction of the ‘right to object’, which gives a person an unqualified right to object to any collection, use or disclosure of their personal information by an organisation for the purpose of direct marketing (see page 133 of the discussion paper). Upon a successful objection, the person will also be able to exercise their ‘right to erasure’. The Government’s focus appears to be marketing, with these requirements to be included in the binding code introduced by the Online Privacy Bill to immediately address the current gap in legislation.

Protections tightening around the world

This tightening of protections aimed at direct targeted advertising is mirrored in legislative changes around the world. Europe’s General Data Protection Regulation, or GDPR, already gives individuals an absolute right to object to the processing of their data for direct marketing purposes. Similarly, the US has introduced the Filter Bubble Transparency Act, a bill that would require internet platforms to let users easily switch to a version of the platform that does not use ‘opaque algorithms’ (algorithms that alter what the user sees based on their user-specific data).

Should Australia join in?

The implementation of similar objection rights for direct marketing in Australia would require firms to establish processes to exclude a person from their marketing or profiling algorithms. The Government is considering further industry feedback on whether this right to object should extend to personal information collected and used in aggregated cohorts rather than at the individual level, and whether loyalty schemes, which offer customers tangible benefits, should be regulated differently.
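
One plausible pattern for such a process (purely illustrative, not a recommendation) is a suppression list that is checked before any marketing or profiling model is run, as in the Python sketch below.

```python
# Hypothetical register of customers who have exercised a right to object
OBJECTIONS: set[str] = {"cust-0042", "cust-0107"}

def eligible_for_marketing(customer_ids: list[str]) -> list[str]:
    """Exclude anyone who has objected before any marketing or profiling model runs."""
    return [cid for cid in customer_ids if cid not in OBJECTIONS]

def run_marketing_model(customer_ids: list[str]) -> dict[str, float]:
    """Stand-in for a propensity or targeting model; scores eligible customers only."""
    return {cid: 0.5 for cid in eligible_for_marketing(customer_ids)}

print(run_marketing_model(["cust-0001", "cust-0042", "cust-0200"]))
# {'cust-0001': 0.5, 'cust-0200': 0.5}
```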

Where the privacy journey is headed

Although the discussion paper doesn’t resolve all the issues around the implications for machine learning we raised in Part 1 of our Privacy Act series, it does give some stronger hints about the direction of the amended Privacy Act. At this stage, it seems likely the legislation will be tightened most significantly for targeted advertising, but it is unclear whether predictive model outputs will fall within the ambit of ‘inferred data’ – a question with significant consequences for organisations.

For now, organisations can continue to take practical steps, as we outlined in Part 1, to assess and reduce the privacy implications of their machine learning models and pipelines. This will highlight their exposure to, and the likely impact of, the proposed changes and ensure they are better prepared, whatever the final outcome of the Privacy Act review.

