Fifty years ago, the first Human Choice and Computers (HCC) conference was organised, chaired by IFIP's President Heinz Zemanek, with the proceedings edited by Mumford and Sackman. An overarching theme of that first conference was a concern over the way the participants felt people were being forced to use computers in dehumanising ways. They argued that sociotechnical problems must be solved in ways that foreground the interests of workers and communities and that, ultimately, human needs must take precedence over technological and economic considerations. Those concerns have never really left us and indeed are close to the theme of the 16th HCC.

In recent months, we have seen the meteoric rise of Generative Artificial Intelligence (generally known as AI) tools such as ChatGPT, ChatSonic, LaMDA, Neeva AI, and DragonFly.

As we claim to move towards a more human-centric Industry 5.0, these and other technological innovations are challenging many of the relationships and choices that exist between humans and computers. These challenges have been documented extensively in the popular press and in academia alike, and they lie at the juncture of human needs on the one hand and technological and economic considerations on the other. While the technology may have changed, the concerns of 50 years ago still seem very fresh.

Some scholars and pundits are profoundly negative in their evaluations of AI technologies, suggesting that these tools will upend many aspects of the status quo in any domain where human creativity dominates, notably education, journalism, research, governance, and, of course, crime. Others, perhaps those with a Machiavellian inclination, are quick to see the advantages associated with the new technology and argue that developments and innovations of this kind cannot simply be stopped by fiat. Indeed, although these tools may have the potential to eliminate creative work, they are themselves the products of creative and fertile imaginations. Unsurprisingly, new tools (themselves premised on AI) that claim to detect AI-created material have also emerged, perhaps initiating a 'war' between the two sides.

What we can expect is that, just as the new technology may solve some problems, it may exacerbate others. For instance, as we noted in the call for papers for the previous HCC15 conference, the encroaching influence of machine-learning-based systems, which can embed the biases inherent in the data they have learnt from, threatens to entrench the societal problems of the past rather than redress them. A wide range of ethical issues is certainly associated with the new technology, and we suggest these issues will prove to offer a cornucopia of new research opportunities.