Please note: Some of the content on this page was published prior to the launch of Creative Australia and references the Australia Council.

Artificial Intelligence in Creative Industries and Practice

Join us for an engaging panel discussion about the developing intersection of AI and creative practices.

About the event

In this discussion, hosted by the Australia Council for the Arts, a panel featuring artist Xanthe Dobbie, Nyungar technologist and digital rights activist Kathryn Gledhill-Tucker, artist and researcher Dr Nina Rajcic, and commercial and intellectual property lawyer Benjamin Duff explored the impact of AI on creative industries and practice.

Read our summary below of the major themes that came out of this talk: the role of technology in the arts and collaborating with machines, the copyright implications of emerging AI technologies, and our responsibilities in developing AI.

James Purtill (facilitator)
James is the technology reporter for the ABC’s specialist science team. Since being introduced to GPT-3 a few years ago, he’s written stories charting the emergence of generative AI, from its uses in the arts and graphic design, to internet copywriting and student essay-writing.

Xanthe Dobbie
Xanthe Dobbie is an Australian artist and filmmaker. Working across on- and offline modes of making, their practice aims to capture the experience of contemporaneity as reflected through queer and feminist ideologies.

Benjamin Duff
Ben is a commercial and intellectual property lawyer at Maddocks with an avid interest in Artificial Intelligence and how it is changing the intellectual property landscape. Ben is involved in the provision of legal advice on a range of commercial issues including copyright, trade marks and ICT contracting to a range of Australian Government and private clients.

Kathryn Gledhill-Tucker
Kat is a Nyungar technologist, writer and digital rights activist living on Whadjuk Noongar boodjar. They are currently leading an initiative at Thoughtworks to grow and nurture a team of Aboriginal and Torres Strait Islander technologists.

Dr Nina Rajcic
Nina Rajcic is an interdisciplinary artist, researcher, and developer exploring new possibilities of human-machine relationships. Her recent works draw inspiration from the link between language and the self, exploring the role of narrative in the synthesising of meaning and the constructing of identity.

The panellists discussed how AI can change the way artists and creatives work, including how they collaborate and how they conceptualize and execute their ideas.

“AI is just another tool… It’s just a mode of making and a different way to view or enhance your practice. Or just like done, use it. Who cares? There’s always gonna be other stuff.” – Xanthe Dobbie

One of the main themes discussed was the potential of AI to expand the range of possibilities for artists and creatives. By automating administrative tasks, AI can free up time and resources for artists to focus on projects. Speaking about her current research at SensiLab Monash, Nina Rajcic mentioned that many artists didn’t mind the idea of using the technology to replace some of the grunt work, allowing themselves more time to ‘focus more on the actual creative pursuits’.

“[AI] is amazing at doing content. But it doesn’t necessarily mean that it’s doing art.”
– Nina Rajcic

The panel also discussed the originality of AI work: whether something created by AI can be truly new, and the potential for AI to democratise the creative process by making it more accessible to a wider range of people. However, without critical engagement with the technology and a desire to push the work further, work created by AI can often feel hollow. Speaking on the use of AI in universities, Xanthe said their students were ‘not able to create art with it, as they got so wrapped up in the idea of being able to just, like, put a text prompt into a form and get an image back, and it was like they weren’t able to push past that step.’

Though this use of AI comes with its own drawbacks, Benjamin Duff noted that much of the work that could be done by AI is also work currently done by entry-level employees, so the implementation of AI could further eliminate paid positions for new workers.

In Australia, the intersection between AI and copyright laws in the creative industries is becoming increasingly complex. With the rise of AI-generated content, questions are being raised about who owns the copyright to such works. When assessing how copyright law applies, practitioners need to consider: what type of work is it and is it covered by the law, what level of intellectual effort has been applied to the prompt, and what was the AI trained on?

While current copyright laws apply to works created by human authors, it is unclear how they apply to AI-generated works. As Ben mentioned, as it stands an argument can still be made that the creator of the AI owns all of the work it generates, far-fetched as this is.

“[Claiming copyright] requires some sort of manual input by the human and it can’t just be dragging files or sound – it needs to be some sort of shaping or directing of the material”

– Benjamin Duff

It’s also useful to recognise that the Copyright Act explicitly mentions mediums for art forms, for example photographs and cinematic works. In the case of work generated by Artificial Intelligence, the law may head in the same direction, labelling it as its own separate medium to give clarity under current copyright laws in Australia.

“Technology moves faster than legislative reform.”

– Kathryn Gledhill-Tucker

The panellists reminded everyone to make a submission to the Australian Government’s ongoing review of the Copyright Act.

AI models source their information from datasets, often scraped from across the internet, which then inform their output.

One issue is the collection and use of personal data. AI algorithms require vast amounts of data to train and improve their accuracy, which can include personal information such as biometric data, health records and online activity. There is a risk that this data can be accessed and misused by unauthorised individuals or organisations. This can lead to privacy violations: although this data is public, that does not necessarily mean it is ethically sourced. Moreover, the panel raised the idea that AI tools are not unbiased generators.

“[The data is] going to be as biased as the data that we put into them. And we need to be very mindful of that when we’re assessing any of its output. It’s going to be as white, as colonial, as capitalist, as the underlying data set that is being fed into it. And that’s going to be replicated in its output.”

– Kathryn Gledhill-Tucker

With all these points in mind, the panellists agreed it is important to approach AI with a critical and reflective mindset.