First-ever European Media Industry Outlook published by the European Commission

The Commission published the first-ever European Media Industry Outlook, analysing trends in the audiovisual, video game and news media industries. Commissioner for Internal Market Thierry Breton presented the report at the European Film Forum, organised at the Festival de Cannes.

The Media Outlook report provides market data and identifies challenges and underlying technological trends common to the media industries. Among other findings, it stresses the structural impact of the ongoing shift in media consumption in favour of digital players. According to the report, growth is mostly driven by segments such as video on demand (VoD), mobile gaming and immersive content.


The report also highlights the relevance of strategic assets such as intellectual property (IP) rights for media companies, and how the retention, acquisition and exploitation of these rights can help companies increase revenues, invest, or remain independent. It also stresses that an early yet considered uptake of innovative technologies and techniques (e.g. AI-based virtual production) is fundamental to adapting, opening up new markets and becoming more competitive. Moreover, audience-driven strategies should serve as a basis for building successful business models.

The full European Media Industry Outlook report is available on the European Commission’s website.

Launch of the AI Media Observatory

In May 2023, the AI4Media consortium launched the beta version of the European AI Media Observatory. The Observatory serves as a knowledge platform that monitors and curates relevant research on AI in media, provides expert perspectives on the potential and challenges that AI poses for the media sector, and allows stakeholders to easily get in touch with relevant experts in the field via its directory.
The Observatory builds on the expertise of more than 30 leading research and industry partners in the field of AI in media.

The newly launched Observatory is envisioned as a one-stop shop for industry, civil society and policymakers interested in the implications of AI in the media sector. It aims to support the ongoing efforts of the multidisciplinary community of professionals working towards responsible use of AI in media, and to contribute to the broader discussion and understanding of how AI is developed and used in the sector and how it affects society, the economy and people.

The Observatory features three main components: ‘Your AI Media Feed’, ‘Let’s Talk AI and Media’ and ‘Find an AI Media Expert’, along with an overview of relevant upcoming events.

  •   ‘Your AI Media Feed’ collects the latest content on AI in media, focusing on emerging trends in the sector, changes in the policy landscape, the societal implications of AI, and approaches to social and ethical AI.
  •   ‘Let’s Talk AI and Media’ is a video site featuring relevant talks, roundtable discussions, and presentations by experts in the field. It provides an easily accessible entry point to ongoing topics and debates in the field.
  •   ‘Find an AI Media Expert’ is an expert directory where you can easily search for, find, and contact a relevant technical, legal, or social expert in the field of media and AI. If you work in this field, you are also welcome to sign up to be featured in the directory.

The Observatory features both content produced as part of the AI4Media project and relevant external content. If you know of relevant content, you are welcome to submit it via the form on the site. All featured content is curated by the Observatory’s editorial board according to its published editorial principles. Experts who sign up for the directory likewise undergo a relevance check by the editorial board.

The current version of the Observatory is a beta release, so some parts are not yet fully developed; the expert directory, for example, will be deployed in full in the coming months. The platform will also undergo adjustments and improvements based on feedback ahead of the official launch of the final version in October 2023.

Will the Digital Services Act (DSA) revolutionise the internet? The present and future of algorithmic content moderation

First, Deliverable D6.2 “Report for Policy on Content Moderation” introduces the concept of “algorithmic content moderation” and explains how matching and classification (or prediction) systems are used to make decisions about content removal, geoblocking, or account takedown. It then provides an overview of the challenges and limitations of automation in content moderation: the lack of context differentiation, the lack of representative, well-annotated datasets for machine learning training, and the difficulty of computationally encoding sociopolitical concepts such as “hate speech” or “disinformation”.
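To make the distinction concrete, here is a minimal Python sketch of the two families of systems; the hash store, model and thresholds are our own illustrative assumptions, not tools described in the deliverable.

    import hashlib

    # Matching: compare a fingerprint of an uploaded item against known items.
    # Real systems use perceptual hashes that survive re-encoding; a plain
    # cryptographic hash keeps this sketch short.
    KNOWN_FLAGGED_HASHES = {"..."}  # hypothetical store of flagged fingerprints

    def matches_known_content(data: bytes) -> bool:
        return hashlib.sha256(data).hexdigest() in KNOWN_FLAGGED_HASHES

    # Classification: a trained model estimates the probability that an item
    # falls into a prohibited category; thresholds turn scores into decisions.
    def moderate(text: str, model, threshold: float = 0.9) -> str:
        score = model.predict_proba([text])[0][1]  # e.g. a scikit-learn classifier
        if score >= threshold:
            return "remove"        # could equally trigger geoblocking or account action
        if score >= 0.5:
            return "human_review"  # uncertain cases are escalated to moderators
        return "keep"

The point of the sketch is the shape of the decision, not the model: the same pipeline can lead to removal, geoblocking, or account takedown depending on policy.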

The tension between content moderation and the fundamental right to freedom of expression is another research theme. The right to freedom of expression in Europe is enshrined in Article 10 of the European Convention on Human Rights (ECHR) and Article 11 of the EU Charter of Fundamental Rights, and includes the right to freely express opinions, views, and ideas and to seek, receive and impart information regardless of frontiers. The use of algorithmic content moderation tools may undermine freedom of information, since such systems may not distinguish adequately between lawful and unlawful content, leading to the over-blocking of lawful communications. On the other hand, the under-removal of certain types of content results in a failure to address hate speech and may create a “chilling effect” on some individuals’ and groups’ willingness to participate in online debate.

Second, the report analyses the EU legal landscape on content moderation along two dimensions. First, it covers the horizontal rules, which apply to all types of content: the e-Commerce Directive, the newly adopted Digital Services Act (DSA), and the Audiovisual Media Services Directive (AVMSD), which imposes obligations on video-sharing platforms. Next, it focuses on rules that apply to specific types of content: terrorist content, child sexual abuse material (CSAM), copyright-infringing content, racist and xenophobic content, disinformation, and hate speech. For each initiative, the report provides a description of the main concepts, a critical assessment, and future-oriented recommendations.

The Digital Services Act (DSA), which entered into force on 16 November 2022, is subject to detailed analysis given its recency and novelty. The main aims of the new rules are to:

  • Establish a horizontal framework for regulatory oversight, accountability and transparency of the online space
    • One of the measures foreseen by the DSA includes the obligation for online platforms to publish yearly transparency reports, detailing their algorithmic content moderation decisions.
  • Improve the mechanisms for the removal of illegal content and for the effective protection of users’ fundamental rights online.
    • The DSA establishes a notice-and-action framework for content moderation. This mechanism allows users to report the presence of (allegedly) illegal content to the service provider concerned and requires the provider to take action in a timely, diligent, non-arbitrary, and objective manner.
  • Introduce rules to ensure greater accountability for how platforms moderate content, for advertising, and for algorithmic processes.
    • In particular, according to Article 14 DSA, online platforms remain free to decide what kind of content they do not wish to host, even if that content is not actually illegal. They have to, however, make this clear to their users. Moreover, any content moderation decisions must be enforced ‘in a diligent, objective and proportionate manner’, with due regard to the interests and fundamental rights involved.
    • Importantly, Article 17 requires providers of hosting services to give any affected recipients of the service a clear and specific statement of reasons for content moderation decisions (a minimal data-model sketch of such a statement follows after this list).
  • Provide users with possibilities to challenge the platforms’ content moderation decisions.
    • The DSA offers new redress routes, which affected users can pursue in sequence or separately: an internal complaint-handling system and out-of-court dispute settlement.
  • Impose new obligations on very large online platforms (VLOPs) and very large online search engines (VLOSEs) to assess and mitigate the systemic risks posed by their systems.
    • VLOPs and VLOSEs have the obligation to self-assess the systemic risks that their services may cause and adopt mitigation measures such as adapting their content moderation and recommender systems policies and processes.
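To illustrate what Article 17’s “clear and specific statement of reasons” implies in practice, the following minimal Python sketch models a statement-of-reasons record; the field names are our own simplification of the elements listed in Article 17, not an official schema.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class StatementOfReasons:
        """Simplified model of an Article 17 DSA statement of reasons."""
        decision: str                      # e.g. "removal", "visibility restriction"
        facts_and_circumstances: str       # what triggered the decision
        automated_detection: bool          # were automated means used to detect it?
        automated_decision: bool           # was the decision itself automated?
        legal_ground: Optional[str]        # legal provision, if the content is illegal
        contractual_ground: Optional[str]  # terms-of-service clause, otherwise
        redress_options: str               # internal complaint, out-of-court settlement, court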

It remains to be seen whether the DSA will be a “success story”. Besides the elements listed above, the DSA also provides a role for a community of specialised trusted flaggers to notify problematic content, a new mechanism for access to platform data (Article 40), and a system of enforcement and penalties for non-compliance.

Third, the report offers a perspective on future trends and alternative approaches to content moderation. These include end-user or community-led moderation, such as the voluntary moderation found on platforms like Wikipedia and Discord. Next, the deliverable outlines content moderation practices in the fediverse, using the Mastodon project as a case study. Although these forms of moderation have many advantages, because there is no centralised fediverse authority there is no way to fully exclude even the most harmful content from the network. Moreover, fediverse administrators generally have fewer resources, as content moderation there is a volunteer-run service. Much will therefore depend on whether and how decentralised content moderation frameworks scale. The report also analyses content moderation in the metaverse, which can be described as an immersive 3D world. One of the key research questions concerns the applicability of the newly adopted DSA to illegal or harmful metaverse content. The need to further amend EU law cannot be ruled out, since virtual reality is not specifically addressed in the DSA; there are, however, interpretations which suggest that virtual 3D worlds fall within its scope.

Fourth, the report outlines the advantages and challenges of self-regulatory accountability mechanisms such as the Facebook Oversight Board (FOB) and the civil-society-proposed Social Media Councils. The FOB, like the Twitter Trust and Safety Council, the TikTok Content Advisory Council, the Spotify Safety Advisory Council, and Twitch’s Safety Advisory Council, has both supporters and critics. Overall, such bodies may provide a valuable complement to robust international legislation and an additional venue for users’ complaints against platforms.

Fifth, the report presents the main takeaways and results of the workshop on AI and Content Moderation organised by two AI4Media consortium partners, KUL and UvA, which brought together academics, media companies, a representative of a very large online platform, and a consultant from an intergovernmental organisation.

Last, the deliverable offers both high-level recommendations and content-specific recommendations on the moderation of terrorist content, copyright-protected content, child sexual abuse material, hate speech, and disinformation. It concludes that there is no easy way to address the multifaceted complexity of content moderation. Effective enforcement of the new rules will be key to striking a balance between the removal of unwanted and illegal content and the fundamental right of online users to express themselves freely.

Author: Lidia Dutkiewicz, Center for IT & IP Law (CiTiP), KU Leuven

Addressing challenges for the use of AI in media. What ways forward?

How can media companies, researchers, and legal and social science scholars tackle the key challenges of using AI applications in the media sector? Deliverable D2.4 “Policy Recommendations for the use of AI in Media Sector” is the result of interdisciplinary research by legal, technical, and societal AI4Media experts, as well as an analysis of the 150 responses from AI researchers and media professionals collected through the AI4Media survey. It provides initial policy recommendations to EU policymakers to address these challenges.

There is enormous potential for the use of AI at different stages of media content production, distribution and re-use. AI is already used in various applications: from content gathering and fact-checking, through content distribution and content moderation, to audiovisual archives. However, the use of AI in media also brings considerable challenges for media companies and researchers, and it poses societal, ethical and legal risks.

Media companies often struggle with staff and knowledge gaps, limited resources (e.g. limited budget for innovation activities), and a power imbalance vis-à-vis the large technology and platform companies that act as providers of AI services, tools and infrastructure. Another set of challenges relates to legal and regulatory compliance. This includes the lack of clear and accessible ethics and legal advice for media staff, as well as the lack of guidance and standards for assessing and auditing the trustworthiness and ethicality of AI used in media applications.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Promoting EU-level programmes for training media professionals
  • Promoting and funding the development of national or European clusters of media companies and AI research labs that focus on specific topics of wider societal impact
  • Promoting initiatives such as the Media Data Space, extended to pooling together AI solutions and applications in the media sector
  • Fostering the development of regulatory sandboxes to support early-stage AI innovation
  • Providing practical guidance on how to implement ethical principles, such as the AI HLEG Guidelines for Trustworthy AI, in specific media-related use cases

Researchers in AI and media often face challenges predominantly related to data: the lack of real-world, high-quality, GDPR-compliant datasets for AI research. Disinformation analysis within media companies suffers not only from restricted access to online platforms’ application programming interfaces (APIs) but also from the lack of common guidelines and standards on which AI tools to use, how to interpret results, and how to minimise confirmation bias in the content verification process.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Supporting the development of publicly available, cleared and GDPR-compliant datasets for AI research (a go-to place for sharing quality AI datasets)
  • Providing formal guidelines on AI and the GDPR that address practical questions faced by the media sector, such as using and publishing datasets containing social media data
  • Promoting the development of standards for bilateral data-sharing agreements between media/social media companies and AI researchers

There are also considerable legal and societal challenges for the use of AI applications in the media sector. First, there is a complex legal landscape and a plethora of initiatives that indirectly apply to media. However, there is a lack of certainty about whether and how various legislative and regulatory proposals, such as the AI Act, apply to the media sector. Moreover, societal and fundamental rights challenges relate to the possibility of AI-driven manipulation and propaganda, AI bias and discrimination against underrepresented or vulnerable groups, and the negative effects of recommender systems and content moderation practices.

To overcome some of these challenges, the report provides the following initial recommendations to the EU policy-makers:

  • Facilitating the establishment of standardised processes to audit AI systems for bias and discrimination
  • Providing a clear vision of the relationship between legacy (traditional) media and the very large online platforms in light of the platforms’ “opinion power” over public discourse
  • Clarifying the applicability of the AI Act proposal to media AI applications
  • Ensuring the coherence of AI guidance across different standard-setting organisations (CoE, UN, OECD, etc.)

Lastly, the report reflects on the potential development of a European Digital Media Code of Conduct as a possible way to tackle the challenges related to the use of AI in media. It first maps the European initiatives that already establish codes of conduct for the use of AI in media. The report then proposes alternatives to such a Code of Conduct, noting that, instead of a high-level list of principles, media companies need a practical, theme-by-theme guide to the ethical handling of real-life use cases. Another possible solution would put more focus on certification to ensure fair use of AI in media.

Author: Lidia Dutkiewicz, Center for IT & IP Law (CiTiP), KU Leuven

AI4Media researchers discuss their work: a video series

The 7th AI4Media Plenary Meeting was held at the University of Florence, Italy, on 31 January and 1 February 2023.

During the event, the AI researchers working in WPs 3, 4, 5 and 6 presented their most recent research results through posters and demos, while the media industry partners gave live demonstrations of the demonstrators developed for the seven AI4Media use cases.

A debate space was also provided for the partners to exchange ideas and get to know each other’s work.

Some of the AI techniques and demos presented at the event are showcased in short videos available on the project’s YouTube channel.

Authors: Candela Bravo & Joana Martinheira (LOBA)

First open call projects come to an end with promising results and contributions to the community

The 10 projects funded under the first AI4Media open call have finalised their activities. The projects, five from the application track and five from the research track, started on 1 March 2022 and ended on 31 October 2022 and 28 February 2023, respectively. Addressing different topics in the AI and media domains, they delivered new applications and research work focusing on audio and music, recommendation systems, edge computation, misinformation, and other areas.

The main results and achievements of the 10 projects are presented below; each project has also contributed to the AI4Media ecosystem.

VRES (Application project by Varia)

The VRES (Varia Research) project set out to revolutionise journalistic research by providing an integrated SaaS solution that combines media monitoring and research organisation in one place. The machine-learning-powered application Varia Research promises more efficient research and additional automated insights. The project has contributed to the AI4Media ecosystem and to the broader media audience with a freely available online research application that brings AI to the heavy lifters of the news media industry: journalists.

AIEDJ (Application project by musicube GmbH)

The AIEDJ (AI Empathic DJ) project has focused on developing neural networks that process audio files and automatically tag them with musical features, sound features and emotions. The project has developed software that retrieves user data from the Spotify API and feeds it into a neural network trained on both music metadata and audio files. The project has contributed software that allows search operations based on the musical information retrieved with the neural nets, weighted by the user’s Spotify listening behaviour (i.e. the user’s perspective on music).
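As a rough illustration of this kind of audio-tagging pipeline, the sketch below summarises an audio file into a feature vector using the librosa library; the feature set and the downstream tag model are our own assumptions, not AIEDJ’s actual software.

    import librosa
    import numpy as np

    def audio_features(path: str) -> np.ndarray:
        """Summarise an audio file into a fixed-length feature vector."""
        y, sr = librosa.load(path, sr=22050, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)  # timbre
        chroma = librosa.feature.chroma_stft(y=y, sr=sr)    # harmony
        tempo, _ = librosa.beat.beat_track(y=y, sr=sr)      # rhythm
        return np.concatenate(
            [mfcc.mean(axis=1), chroma.mean(axis=1), np.atleast_1d(tempo)]
        )

    # A tag model (e.g. a small neural network trained on labelled tracks)
    # would then map this vector to tags such as mood, energy, or genre:
    # tags = model.predict(audio_features("track.mp3").reshape(1, -1))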

InPreVI (Application project by JOT Internet Media)

The InPreVI (Inauthentic web traffic Prediction in Video marketing campaigns for investment optimization) project set out to develop an innovative AI-based system that can, first, identify the main behavioural patterns of inauthentic users in order to predict their actions and limit their impact on video marketing campaigns and, second, model the quality score associated with a campaign. InPreVI has contributed a dataset that can be used to train and validate predictive and classification models or to enrich other data; a classification model that illustrates the potential uses of the dataset; and a predictive model for conversion difference.
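A minimal sketch of how such a classification model could be trained on behavioural session data follows; the file name and feature columns are hypothetical placeholders, not InPreVI’s actual schema.

    import pandas as pd
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split

    # Hypothetical layout: one row per session, behavioural features plus a
    # 0/1 label marking sessions judged inauthentic.
    df = pd.read_csv("traffic_sessions.csv")
    X = df[["clicks", "session_duration", "pages_per_visit"]]
    y = df["inauthentic"]

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, stratify=y, random_state=0
    )
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")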

CUHE (Application project by IN2 Digital Innovations GmbH)

The CUHE (An explainable recommender system for holistic exploration and CUration of media HEritage collections) project set out to develop and demonstrate a web-based application, based on AI recommendations, that allows cultural heritage professionals (e.g. museum curators, archivists) and researchers to explore existing media and cultural heritage digital collections in a more holistic way and to curate new galleries or create digital stories and exhibitions that showcase and share the new insights gained. The project has contributed the CUHE recommender system, which will be made available as a service, as well as a related dataset.

CIMA (Application project by AdVerif.ai)

The CIMA (Next-Gen Collaborative Intelligence for Media Authentication) project has focused on creating a next-generation collaborative intelligence platform, powered by the latest AI advancements, to make journalists and fact-checkers more effective in media authentication. The work focused on collaborative investigation and the collection of evidence to support cross-EU investigations and knowledge sharing. The CIMA project has also provided a novel system for the preservation of evidence on the Internet, and has contributed algorithms for integration with common open-source intelligence tools.

RobaCOFI (Research project by the Institut Jozef Stefan)

The RobaCOFI (Robust and adaptable comment filtering) project set out to develop new methods to overcome the challenges of moderating comments associated with news articles, a task usually done by human moderators, whose decisions can be subjective and hard to make consistently. The project has developed methods for the semi-automatic annotation of data, including new variants of active learning in which the AI tools quickly select the data that most need to be labelled. The work builds on recent progress in topic-dependent comment filtering to create tools that can take the context of the associated news article into account, reducing the amount of new data needed. The project has contributed several public resources, including a pre-trained offensive language moderation classifier and software tools for model adaptation and active learning.
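The active-learning idea can be sketched in a few lines: train on a small labelled pool, repeatedly pick the comments the model is least certain about, and send only those to human annotators. This is a generic uncertainty-sampling illustration, not the project’s published code.

    import numpy as np

    def select_for_annotation(model, X_unlabelled, batch_size: int = 50):
        """Return indices of the unlabelled comments the model is least sure about."""
        proba = model.predict_proba(X_unlabelled)
        uncertainty = 1.0 - proba.max(axis=1)  # low top-class probability = unsure
        return np.argsort(-uncertainty)[:batch_size]

    # Loop: fit on the labelled pool, select uncertain comments, have moderators
    # label just those, add them to the pool, and refit:
    # model.fit(X_labelled, y_labelled)
    # to_label = select_for_annotation(model, X_unlabelled)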

NeurAdapt (Research project by Irida Labs)

The NeurAdapt (Development of a bio-inspired, resource-efficient design approach for designing deep learning models) project set out to explore a new path in the design of deep convolutional neural networks (CNNs), which could enable a new family of more efficient and adaptive models for any application that relies on the predictive capabilities of deep learning. Inspired by recent advances in the study of biological interneurons, which highlight the importance of inhibition and random connectivity for the encoding efficiency of neuronal circuits, the project investigated mechanisms that could impart similar qualities to artificial CNNs. NeurAdapt has contributed an “as a service” asset that provides access to a dynamic-computation CNN feature extraction network for image classification, and a free-to-use executable that offers hands-on experience with the NeurAdapt technology, using a small and fast feature extraction network trained on the CIFAR-10 dataset.
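The dynamic-computation idea can be pictured with a small PyTorch sketch, a deliberate simplification of ours rather than NeurAdapt’s actual design: a cheap gating branch decides, per input, which channels of a convolution are worth keeping, so easy inputs exercise less of the network.

    import torch
    import torch.nn as nn

    class GatedConvBlock(nn.Module):
        """Convolution whose output channels can be suppressed per input (sketch)."""
        def __init__(self, in_ch: int, out_ch: int):
            super().__init__()
            self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
            # Tiny gating branch: global pooling + linear layer, one gate per channel.
            self.gate = nn.Sequential(
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(in_ch, out_ch), nn.Sigmoid(),
            )

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            g = self.gate(x)                 # (batch, out_ch) gates in [0, 1]
            y = torch.relu(self.conv(x))
            # Channels gated towards 0 contribute almost nothing; a deployed
            # model could skip computing them entirely to save inference cost.
            return y * g.unsqueeze(-1).unsqueeze(-1)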

SMAITE (Research project by the University of Manchester)

The SMAITE (Preventing the Spread of Misinformation with AI-generated Text Explanations) project has focused on developing a novel tool for the automated fact-checking of online textual content that contextualises and justifies its decisions by generating human-accessible explanations. The project’s vision has been to equip citizens with a digital literacy tool that not only judges the veracity of any given claim but, more importantly, also presents explanations that contextualise and describe the reasoning behind the judgement.

TRACES (Research project by the Sofia University “St. Kliment Ohridski”, GATE Institute)

The TRACES (AuTomatic Recognition of humAn-written and deepfake-generated text disinformation in soCial mEdia for a low-reSourced language) project set out to find solutions and develop new methods for disinformation detection in low-resourced languages. The innovation of TRACES lies in detecting both human-written and deepfake-generated disinformation, recognising disinformation by its intent, its interdisciplinary mix of solutions, and the creation of a package of methods, datasets, and guidelines for building such methods and resources for other low-resourced languages. The project has contributed machine learning models for detecting untrue information in Bulgarian and texts automatically generated by models such as GPT-2 and ChatGPT; social media datasets automatically annotated with markers of lies; and other resources.
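To illustrate the machine-generated-text detection task itself, the snippet below runs a public English GPT-2-output detector with the Hugging Face transformers library; it is shown only as an example of the task and is not one of the TRACES models, which target Bulgarian.

    from transformers import pipeline

    # Public detector trained to separate GPT-2 output from human-written text.
    detector = pipeline(
        "text-classification",
        model="openai-community/roberta-base-openai-detector",
    )
    print(detector("The text whose origin we want to check goes here."))
    # e.g. [{'label': 'Real', 'score': 0.98}]; 'Fake' marks machine-generated text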

edgeAI4UAV (Research project by the International Hellenic University)

The edgeAI4UAV (Computer Vision and AI Algorithms Edge Computation on UAVs) project has focused on developing a complete framework for detecting and tracking moving people and objects in order to extract evidence data (e.g. photos and videos of specific events) in real time, as the event occurs, for tasks such as cinematography, through a reactive unmanned aerial vehicle (UAV). To this end, the project implemented an edge computation node for UAVs, equipped with a stereoscopic camera, which provides lightweight stereoscopic depth information to be used for evidence detection and UAV locomotion.
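A minimal OpenCV sketch of the stereoscopic-depth step follows, as a generic illustration of the technique; the parameters are arbitrary defaults, not the project’s tuned values.

    import cv2

    # Load a rectified stereo pair as captured by a stereoscopic camera.
    left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)
    right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

    # Semi-global block matching yields a disparity map: nearer objects have
    # larger disparity, which camera calibration converts to metric depth.
    matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=5)
    disparity = matcher.compute(left, right).astype("float32") / 16.0  # fixed-point

    # depth = focal_length * baseline / disparity, given the calibrated camera pair.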

Authors: Samuel Almeida & Catarina Reis (F6S)

The AI4Media Junior Fellows’ collection of testimonials has been released

The Junior Fellows Exchange Program is primarily expected to contribute to the creation of a critical mass of early career researchers with a deeper understanding of both media AI research and media industry needs, through collaborative work with research labs and media companies in Europe. All parties involved benefit from novel ideas and the spread of media AI expertise and skills.

The AI4Media Junior Fellows Exchange Program has been a success, with over 60 exchanges of researchers from more than 40 organisations across Europe, and with important outcomes in the form of papers, software, and datasets.

This booklet presents the testimonials of 20 Junior Fellows who participated in the program in 2021-2022. The Fellows discuss the projects they worked on, their views on the opportunities offered by the program, and their advice to researchers who might be thinking about an exchange.

We thank the Fellows for their contributions to AI4Media, and invite junior and senior researchers across Europe to join the program.

Authors: Filareti Tsalakanidou (CERTH), Daniel Gatica-Perez (IDIAP), Yiannis Kompatsiaris (CERTH)

New AI4Media white papers released: industry needs for AI uptake in the media

AI is already here and is pervasive, with many applications in the media sector, from news research and production to game development, music generation, and media asset management. Europe is home to numerous research labs and universities exploring the vast possibilities and limits of AI, as well as to a vibrant ecosystem of media companies that want to use AI to improve their products, services, and operations. But bridging the gap between AI scientists and researchers and the actual end-users of AI algorithms has always been a challenge. In AI4Media, we seek to narrow this gap by publishing a set of white papers that aim to align AI research with the industrial needs of media companies, describing the most important challenges and requirements for AI uptake in each use case area within the media industry.

The seven AI4Media white papers deal with the use of AI in several media domains across the media and content value chain: disinformation detection and analysis; news research, production, and publication; media production; data-driven research with media content in the social sciences and humanities; video game testing and music processing; music composition; and media asset organisation and management.

Below we provide an overview of the key messages and insights from each white paper.

AI Support Needed to Counteract Disinformation

  • Most fact checking and verification specialists regard AI technologies as highly valuable and important to support them in the task of counteracting disinformation, despite shortcomings associated with some existing tools.
  • New AI support functions are needed in two main areas of fact checking and verification work:
    1. Detection of synthetic media items or synthetic elements, and identification of content manipulation,
    2. Detection of disinformation narratives in online/social media, including the respective content, actors, or networks.
  • The user group of fact checkers and verification specialists has a high need for trustworthy, understandable AI support functions, especially in terms of explainability, transparency, and robustness.


AI for News. The smart news assistant

  • There is a clear opportunity for AI tooling to facilitate mundane and burdensome journalistic tasks, giving more space to creativity and original investigative and informative work.
  • Because of the fragmented information landscape, monitoring assistance is of interest to journalists.
  • The fact that journalists are increasingly confronted with disinformation results in a need for understandable, accessible and easy-to-use AI tools for fact-checking.


AI in Vision: High Quality Video Production and Content Automation

  • Several crucial tasks in the media value chain are not well covered by existing tools; new AI-driven tools are needed to fill this gap.
  • Trustworthy AI features are among the key factors affecting the wide adoption of AI in the news media sector, especially those related to privacy protection and legal compliance. The research community should push as much as possible to build trustworthy AI tools that respect user privacy and comply with relevant regulations.


AI Techniques for Social Sciences and Humanities Research

  • While many researchers are well versed and technically supported in textual analysis, AI tools for the multimodal content analysis of still images, moving images and sound fall short of meeting end-users’ requirements. This is due to algorithmic limitations and to UI/UX considerations not being fully taken on board.
  • To fully integrate AI tools into their workflows, researchers require flexible, easily configurable, transparent and explainable solutions that could be adopted in a variety of research scenarios.


AI for Video Game Testing and Music Processing

  • AI-powered tools shouldn’t replace quality assurance and the music analysis/synthesis work done by humans, but rather enhance existing practices and help humans achieve their tasks.
  • Industry partners don’t mind spending more time to get AI-powered tools working but they must be able to easily integrate them into their production pipeline.
  • It is important to have fine control over the input of the automated AI systems and provide a variety of methods to showcase their output.


AI music composition tools for humans

  • Tools for music co-creation go beyond learning models and should include the architectural requirements a user needs to run a full application. This means access to powerful computing infrastructure.
  • A creative process cannot be formalised; a key element is the balance between powerful tools and the freedom to use and combine them. This is the basic requirement for the co-creative process.


AI Technology in Image & Video Organisation

  • AI-enhanced automated organisation of large media collections significantly helps media companies reduce costs and, at the same time, provides new opportunities for visual content monetisation.
  • Media companies have realised the importance of AI-enhanced image and video (re)organisation technologies but have lagged in implementing them as part of their workflows.

A common theme across almost all use case areas is the demand for trustworthy AI tools that are explainable and easily understandable by their end-users. Naturally, user experience also matters: a smooth experience and intuitive interfaces are a key requirement for most media professionals. Finally, maintaining control over AI results and any subsequent decision-making process is an important factor for media professionals.

Author: Danae Tsabouraki (ATC)