To reap the benefits of artificial intelligence (AI), it is crucial that all Australians, from organizations to consumers, have trust in the technology. Trust is central to human-centred AI. However, many high-profile instances of biased and harmful systems have been documented, and concerns have been raised about the ethical issues and risks of AI systems (Australian Human Rights & Tech Report, 2021; Gillespie, Lockey & Curtis, 2021; UN Human Rights Council Report, 2021). These concerns relate to bias, fairness, equality, transparency, empathy, dignity, privacy, human control and oversight, sustainability, and reliability.
This project is a field study of AI creators, exploring the gap between trustworthy AI ideals and what happens in practice. It extends work initially developed in 2021 with support from a UQ multi-disciplinary Human-Centred AI strategic fund, in collaboration with colleagues from UQ Business, ITEE, and Psychology. A mixed-methods approach combining an online survey and participant interviews will be used to better understand whether, and why, the AI developer community engages with trustworthy AI practices, and to identify the contextual factors that influence this engagement, including organization size, type, and sector; exposure to ethics training; and themes such as organizational pressure, empowerment, and psychological safety.