The data economy is not going away; it is accelerating, and already driving massive structural shifts that have the potential to change Australia more profoundly than electricity, rail or internet connectivity.
AI holds enormous promise for solving Australia’s most significant problems, such as climate change, cybersecurity, personalised medicine, more efficient cities, and sustainable energy and food. It is expected to contribute about $US13 trillion ($18 trillion) of additional global economic activity by 2030. Arguably, the framing of this opportunity has so far focused on the economic benefits, which are critical, but only if they also deliver societal benefit. So far, the jury is out.
The next stage of AI has the potential to become an experiment with democracy, free will and unassailable data economics unless we get ahead of the legal and ethical considerations, globally and in Australia.
In our work as the digital and data specialist arm of Australia’s national science agency, we see this first-hand every day, as domestic partners grapple with what is happening globally and begin to understand the existential threat data economics pose to their businesses. Other countries are already coming together to share the data used to “train” these systems in areas such as genomics and cybersecurity, achieving the scale needed to compete and sowing the seeds of new domestic industries.
Monopolies of the past wielded enormous economic power, but today is different. AI-enabled platforms have unprecedented data collection capabilities, as well as the channels to deliver the resulting services using carefully architected and unchecked methods designed to exploit our human vulnerabilities. They have the potential to manipulate our cognitive biases at scale, and to distort our realities and decision-making. This can be seen in Cambridge Analytica’s alleged influence on the US presidential campaign and on Brexit, which at some level challenged the democratic process itself.
It is questionable whether the unintended consequences of these platforms can be reversed in the short term, because the underlying AI systems are limited in their ability to automatically detect the rogue behaviour that perturbs how the platforms function. The far-reaching implications are an indicator of challenges in other emerging AI application areas. This also presents an opportunity, if we move more decisively.
We can play a global leadership role around some of these issues. Australia’s egalitarian culture and per capita strength in machine learning and other aspects of AI such as computer vision and robotics afford us that potential. That’s why CSIRO’s Data61 led the development of a National AI Ethics Framework in consultation with the Department of Industry, Innovation and Science, along with industry and academia. It’s now in the public domain for comment by all interested Australians. We’re hoping it’s the start of a more robust and much needed national conversation about data economics and the implications.
AI systems should generate net benefits for society that outweigh the costs, and must not be designed to harm or deceive people. They should comply with all relevant regulations and ensure personal private data is protected and kept confidential, while preventing data breaches that could cause reputational, emotional, financial, professional or other types of harm.
AI systems must not result in unfair discrimination against individuals, communities or groups. People must be informed when an algorithm that affects them is being used, and they should be told what information and “training data” the algorithm uses to make decisions. They should also be able to challenge the use of the algorithm.
People and organisations responsible for the creation and implementation of AI algorithms should be identifiable and accountable for the impacts of those algorithms, even when the impacts are unintended.
Today’s AI systems, on the whole, do not operate like this at all, and their creators are not held to this sort of account. In many cases this is not because they are unwilling, but because AI technology has developed so quickly that we have not been able to anticipate the consequences, intended or otherwise. The response, including the automated detection of rogue behaviours, will require new research, techniques and tools, and new industry approaches, but these are necessary investments.
Adrian Turner is chief executive of CSIRO’s Data61.