A newly published white paper presents an in-depth mapping of the main potentials and challenges of AI applications across the media cycle, providing a unique overview of the state-of-the-art discussion of the societal impacts of AI. Based on this mapping, it distils provisional guidelines and considerations to guide the future work of industry professionals, policy makers and researchers.
The white paper has been produced by researchers from the University of Amsterdam, The Institute for Sound and Vision and KU Leuven as part of the AI4Media project. It is based on a thorough review of academic journals published by scholars in the humanities, social science, media and legal studies, as well as reports developed either with a specific focus on AI in the media sector or with a broader outlook on AI in society.
The white paper is divided into two major parts. The first part identifies the main potentials and challenges across the entirety of the media cycle, including i) ideation, ii) content gathering, iii) media content production, iv) media content curation and distribution, v) deliberation over the content, and vi) archival practices. The second part explores six societal concerns that affect the media industry. These include:
- Biases and discrimination: On the one hand, AI is discussed as a potential means of mitigating existing media biases (e.g., the overrepresentation of male sources). On the other hand, there is concern about how AI systems might sustain or even amplify existing biases (e.g., in content moderation, where minorities are less protected from hate speech), and how that might have severe long-term effects on the role of media in society and the democratic practices it cultivates.
- Media (in)dependence and commercialisation: The “platformisation” of society also applies to the media sector, which depends on, e.g., social media to distribute content and is entangled in commercial data infrastructures. A major concern is the effect of this commercialisation and reliance on different platforms on media independence.
- Inequalities in access to AI: While the use of AI is expanding rapidly, it is not doing so equally across the world. The primary beneficiaries of AI solutions remain the Global North and particularly English-speaking countries. Inequality in access is, therefore, also a major concern. In the media sector, this gap is widening further because of existing competitive differences between smaller and larger media organisations, which could reduce media diversity.
- Labour displacements, monitoring, and professional control: AI is often discussed in terms of the risk of labour displacement. In the media sector, the effects of AI on existing jobs remain limited, although some examples of displacement are emerging. However, AI also introduces new power asymmetries between employees and employers as metrics and monitoring practices become more common. Finally, AI is transforming existing media practices (e.g., genres and formats) and challenging professional control and oversight of both production and distribution practices.
- Privacy, transparency, accountability, and liability: The privacy discussion regarding AI for media relates mostly to data privacy, where commercial and democratic ideals come into conflict. Media organisations must consider their responsibility for data privacy, and new best practices for responsible data handling are needed. Transparency is mainly discussed in relation to the disclosure practices that media organisations currently employ, and how streamlining is needed to ensure better transparency across the media landscape. Accountability is mainly discussed in relation to how and where to place responsibility as new actors, such as providers of AI services, enter the media landscape.
- Manipulation and mis- and disinformation as an institutional threat: The threat of manipulation is highly present in the discussion of AI and media, as well as in society at large, through concepts such as ‘fake news’. In the media sector specifically, much discussion centres on how other actors can manipulate public opinion by manipulating content (e.g., deep fakes) or by affecting modes of distribution (e.g., bots). As media continue to serve an important role in society as trusted sources of information, the potential damage to the trustworthiness of media is significant. Because media organisations are core actors in the fight against disinformation, developing tools to support the work of media professionals is important.
In the white paper, these discussions are fleshed out further, and core points of consideration are suggested for the media industry, policy makers and AI researchers who engage with the media sector, to help guide future work and research on AI.
Access the full white paper HERE.
A second version of the white paper will be developed and published in December 2023. In that version, some of these core points of consideration will be further explored and qualified through workshops with relevant media organisations, which can help provide even more concrete suggestions for best practices.
Author: Anna Schjøtt Hansen (University of Amsterdam)