Regulation is key to responsible AI, but what might this look like?

19 April 2023

By Dr Caitlin Curtis and Dr Steven Lockey, The University of Queensland

Generative AI – including chatbots (such as ChatGPT and Bard) and image generators (such as DALL-E and Stable Diffusion) – is a hot topic that has sparked many conversations about the benefits and risks of Artificial Intelligence (AI) tools in society.

AI systems increasingly impact our lives, transforming the way work is done and how services are delivered. AI can enhance efficiency, effectiveness and personalisation, and reduce costs. However, people have also raised concerns about job displacement, bias, loss of privacy, and misuse, fuelled by high-profile cases of AI use that was biased, discriminatory, manipulative, unlawful or violated human rights. One of the reasons these risks may be heightened is insufficient regulation (Curtis, Gillespie, & Lockey 2022).

But how much do people trust AI-enabled systems, and what are their expectations with respect to regulation and oversight? This is important to understand because, in order to fully realise the benefits of AI, people need to be confident that AI systems are developed, used, and governed in a responsible and trustworthy manner.

In 2022, we surveyed over 1,000 people in Australia – a sample nationally representative by gender, age, and location – as part of a wider survey of public attitudes toward artificial intelligence across 17 countries (Gillespie et al., 2023).

We found that trust in AI systems is relatively low in Australia, with only 34% of people indicating that they are willing to trust AI systems. Whilst many Australians believe AI will have benefits such as improved efficiency (78%), innovation (75%), effectiveness (68%) and reduced costs (66%), they also have a variety of concerns about the risks. Top among these are cybersecurity risks (84%), job loss due to automation (78%), and loss of privacy (75%). Less than half (44%) of Australians agree that the benefits of AI outweigh the risks (Gillespie et al., 2023).

We found strong support from Australians for trustworthy AI principles – such as privacy and security, transparency and explainability, robustness and safety, and fairness and non-discrimination – like those found in Australia’s AI Ethics Framework (2019). However, our research indicates that only 35% of Australians feel there are sufficient laws, governmental processes, or safeguards in place to make the use of AI safe (Gillespie et al., 2023). We found no substantive difference in this belief compared to 2020, when we last surveyed Australians’ perceptions of AI (Lockey et al., 2021), suggesting that Australians are no more confident that there are sufficient safeguards around AI than they were two years prior. Most Australians agree that AI should be regulated, whether by an independent AI regulator (72%) – in line with recommendations from the Australian Human Rights Commission (2021) – or by government and/or existing regulators (72%).

Examples of AI governance

What could regulation of AI look like? Examples of AI governance and AI regulation are beginning to emerge, with some countries implementing laws and regulations to ensure its responsible development and use.  

One significant development in this regard is the European Union’s proposed AI Act, which is expected to come into effect in 2024. The AI Act aims to establish a common European framework for AI regulation that addresses the risks associated with AI while fostering innovation.  

Under the proposed AI Act, AI applications would be regulated based on the level of risk they pose. High-risk applications, such as those used in critical infrastructure or for law enforcement purposes, would be subject to strict regulations, including requirements for human oversight and transparency. Lower-risk applications would still be subject to requirements, such as documentation and transparency, but would face less strict regulation overall. Certain AI applications would be banned altogether, such as those that manipulate people’s behaviour in a way that could be harmful or those that use biometric data to identify individuals in public spaces without their consent.

The AI Act would also impose new requirements on AI developers, including requirements for robustness, accuracy, and cybersecurity, as well as data protection and privacy requirements. These requirements aim to ensure that AI is developed and used responsibly while protecting the rights and privacy of individuals.  

The EU’s General Data Protection Regulation (GDPR) also plays a role in regulating AI. The GDPR requires companies to obtain consent from individuals before collecting and using their personal data, which is a critical component of AI development. The GDPR also establishes requirements for the protection of personal data, including requirements for data minimisation, accuracy, and security. These requirements aim to ensure that personal data is protected and used in a way that is transparent and accountable.

In Australia, our Privacy Act is currently under review, which gives us an opportunity to make it more fit for purpose for the digital age and ensure that Australians’ expectations around AI are met. For example, we support aligning the definition of personal information more closely with that in the GDPR, to increase protection for Australians and ensure that their expectations around data use are upheld (Curtis, Gillespie, & Lockey 2020).

In the United States, there is no federal law specifically regulating AI. However, individual localities, including New York City, have established bespoke legislation around specific types of AI use – for example, prohibiting the use of AI and algorithm-based technologies in recruiting, hiring or promotion decisions unless they have first been audited for bias (New York City Local Law 144 (2021)).

The speed of innovation and applications around AI has sparked concerns about its potential impact on society. Australia has the opportunity to act now to ensure that AI development and use is governed in ways that align with human values, prevent harm, and safeguard public trust. We advise that regulators look to other jurisdictions for guidance. A risk-based approach to AI regulation – as per the EU AI Act – would be a good place to start.

References

Curtis, C., Gillespie, N. and Lockey, S. 2022. AI-deploying organizations are key to addressing ‘perfect storm’ of AI risks. AI and Ethics 3, 145–153. doi: 10.1007/s43681-022-00163-7   

Gillespie, N., Lockey, S., Curtis, C., Pool, J., and Akbari, A. 2023. Trust in Artificial Intelligence: A Global Study. The University of Queensland and KPMG Australia. doi: 10.14264/000d3c94

Lockey, S., Gillespie, N., and Curtis, C. 2021. Trust in Artificial Intelligence: Australian Insights. The University of Queensland and KPMG Australia. doi: 10.14264/b32f129

Australian Human Rights Commission. 2021. Human Rights and Technology Report. 240pp. https://humanrights.gov.au/our-work/rights-and-freedoms/publications/human-rights-and-technology-final-report-2021  

European Commission. Proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act) | Shaping Europe’s digital future. https://digital-strategy.ec.europa.eu/en

General Data Protection Regulation: https://gdpr-info.eu/  

Curtis, C., Gillespie, N., and Lockey, S. 2020. Submission to the Review of the Privacy Act 1988 (Cth), Australian Government Attorney-General’s Department. The University of Queensland. doi: 10.14264/501b50f

New York City Local Law 144. 2021. Automated employment decision tools. https://legistar.council.nyc.gov/LegislationDetail.aspx?ID=4344524&GUID=B051915D-A9AC-451E-81F8-6596032FA3F9&Options=ID%7CText%7C&Search=

_________________________________________________________________________________

This article is republished from the National Regulators Community of Practice (NRCoP), auspiced by ANZSOG, for their Regulation, Policy and Practice Newsletter. Read the original article.
