On April 29, 2021, AI4Media will support the organisation of the online workshop “Towards a Global Taxonomy of Interpretable AI”, which will take place from 9 am to 1 pm.
The workshop will host a series of invited short talks by experts from different domains, each illustrating their perspective on Interpretable AI. The experts will discuss cognitive, social, ethical, and legal perspectives, giving a global overview of how interpretable AI is defined in the literature of these different fields.
A panel session will also be conducted, where experts can answer questions on, among other topics:
- interpretability toolboxes,
- ethical and philosophical concerns,
- how these tools can be used in practice, for example in medicine.
The main goal of the workshop is to foster a multidisciplinary discussion in which people from different backgrounds can interact and discuss the needs of AI and its interpretability requirements.
Follow the streaming of the event on YouTube HERE.
More information about the event HERE.
The online workshop “Building interpretable AI for digital pathology”, supported by AI4Media, will be held within the Applied Machine Learning Days (AMLD) in Lausanne on April 27, 2021, from 9 am to 12 pm.
During the workshop, HES-SO Valais and IBM Research Zurich will give an overview of interpretability techniques for machine learning algorithms applied to digital pathology.
The workshop will also include introductory talks on AI for digital pathology and on the interpretability of these complex models, which will be followed by hands-on coding tutorials focused on how to apply interpretability techniques to histopathology data (for classification tasks).
Basic techniques will be explained, as well as the latest approaches in the field, such as regression concept vectors and graph-based modeling.
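To give a flavour of the basic techniques such tutorials typically cover, one of the simplest attribution methods is a gradient-based saliency map: the sensitivity of a class score to each input pixel highlights the regions the model relies on. The sketch below illustrates the idea with a toy NumPy linear scorer standing in for a trained histopathology classifier; all names, shapes, and weights are illustrative assumptions, not the workshop's actual material.

```python
import numpy as np

def saliency_map(weights: np.ndarray, patch: np.ndarray) -> np.ndarray:
    """Absolute gradient of the class score w.r.t. each pixel.

    For a linear scorer (score = w . x) the gradient is just w itself;
    a deep model would obtain the same quantity via backpropagation.
    """
    grad = weights.reshape(patch.shape)  # d(score)/d(pixel)
    return np.abs(grad)

# Toy 4x4 tissue "patch" and hypothetical trained weights
rng = np.random.default_rng(0)
patch = rng.random((4, 4))
w = rng.normal(size=16)

sal = saliency_map(w, patch)                      # same shape as the patch
top = np.unravel_index(sal.argmax(), sal.shape)   # most influential pixel
```

In practice the same recipe is applied to a CNN with one backward pass per class score; more advanced methods, such as the regression concept vectors mentioned above, instead attribute scores to human-interpretable concepts rather than raw pixels.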
The workshop will be useful for M.Sc. and Ph.D. students as well as researchers in industry.
The entry fee for the event is between CHF 69 and CHF 120.
More information about the event HERE.
On October 1st, 2020, AI4Media organised an online workshop on “GANs for Media Content Generation”, with the objective of looking into the use of Generative Adversarial Networks (GANs) for Media production and related challenges.
Among others, the workshop covered topics such as:
- Learning to Predict Pixels Using AI for Content Enhancement and Delivery
- Deepfake Detection: The Importance of Training Data Pre-processing and Practical Considerations
- Image and Video Generation: A Deep Learning Approach
- Adversarial Face De-identification for Privacy Protection
- Major Challenges in the Detection of Synthetic Media and Deepfakes
The recording of the workshop and all the presentations are available at this LINK.
The Metadata Developer Network Workshop 2021 or MDN Workshop 2021 will take place from May 25th to May 27th, 2021 in the form of three webinars with in-depth presentations, demonstrations, and discussions.
The MDN Workshop is the annual meeting point for developers working on Metadata and Artificial Intelligence in broadcasting.
The event is organised under the EBU Production Strategic Programme by Media Information Management and AI (MIM-AI) and the Metadata Developer Network (MDN), an active community for developers to share knowledge, learn from their peers, get feedback and collaborate on metadata-related projects.
On the 27th of May at 15:40 CET, AI4Media will address the topic “Enabling AI in the media production workflows and beyond”.
The workshop is open to the public and free of charge.
For more information about the programme and registration, please visit https://tech.ebu.ch/events/mdn2021
The first AIDA AI excellence lecture is approaching fast!
On Tuesday, 26 January 2021, from 17:00 to 18:00 CET, we are thrilled to have Prof. Tinne Tuytelaars, an internationally prominent AI researcher from KU Leuven, deliver the e-lecture “Keep on learning without forgetting”.
This lecture will address Prof. Tuytelaars' recent work on machine learning, with a focus on learning deep models for computer vision.
More information about this lecture at http://www.i-aida.org/ai-lectures/
Join for free using the zoom link: https://authgr.zoom.us/j/96526281132?pwd=azBsNDUxb2JGVGlUOEpYcFZ6SXhLZz09
The International AI Doctoral Academy (AIDA), currently being formed as a joint initiative of the European R&D projects AI4Media and Vision, will deliver top-quality scientific lectures on several current hot AI topics, starting with the above-mentioned e-lecture.
Lectures will be offered alternately by:
- top, highly cited senior AI scientists from around the world, or
- junior AI scientists showing the promise of excellence (AI sprint lectures).
Lectures will typically be held once per week, on Tuesdays from 17:00 to 18:00 CET (8:00-9:00 am PST; 12:00-1:00 am CST). Attendance is free.
If you want to stay informed on future lectures, you can register in the CVML email list.
AI4Media is a Technical Sponsor of and Partner in the International Conference on Content-Based Multimedia Indexing (CBMI), which will take place on 28-30 June 2021 in Lille, France.
The eighteenth edition of CBMI aims to bring together the various communities involved in all aspects of content-based multimedia indexing for retrieval, browsing, management, visualization, and analytics.
Topics of interest to the CBMI community include, but are not limited to, the following:
- audio, visual, and multimedia indexing,
- multimodal and cross-modal indexing,
- deep learning for multimedia indexing,
- visual content extraction,
- audio (speech, music, etc.) content extraction,
- identification and tracking of semantic regions and events,
- social media analysis.
More information at https://cbmi2021.univ-lille.fr/
On 10-15 January 2021, AI4Media will organise a workshop at ICPR 2020 on “Multi-Modal Deep Learning: Challenges and Applications”.
Deep learning is now recognized as one of the key software engines driving the new industrial revolution. The majority of current deep learning research efforts have been dedicated to single-modal data processing, with deep learning-based visual recognition and speech recognition as prominent examples. Although significant progress has been made, single-modal data is often insufficient to derive accurate and robust deep models in many applications.

Our digital world is by nature multi-modal, combining different modalities of data such as text, audio, images, animations, videos, and interactive content. Multi-modal content is the most popular form of information representation and delivery: posts about trending social events are typically composed of textual descriptions, images, and videos, and for medical diagnosis the joint use of medical imaging and textual reports is essential. Humans routinely draw on multiple modalities to make accurate perceptions and decisions. Multi-modal deep learning, which is capable of learning from information presented in multiple modalities and consequently making predictions based on multi-modal input, is therefore much in demand.
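One common way such models combine modalities is feature fusion. The minimal NumPy sketch below contrasts early fusion (concatenate per-modality features, then apply one joint predictor) with late fusion (score each modality separately, then average the predictions); the feature sizes and linear "heads" are illustrative assumptions, not part of the workshop's programme.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical embeddings from two modality-specific encoders
img_feat = rng.random(8)   # e.g. a CNN image embedding
txt_feat = rng.random(4)   # e.g. a text-encoder embedding

# Early fusion: concatenate features and apply one joint linear scorer
joint = np.concatenate([img_feat, txt_feat])       # shape (12,)
w_joint = rng.normal(size=joint.size)
early_score = float(joint @ w_joint)

# Late fusion: score each modality separately, then average predictions
w_img = rng.normal(size=img_feat.size)
w_txt = rng.normal(size=txt_feat.size)
late_score = 0.5 * float(img_feat @ w_img) + 0.5 * float(txt_feat @ w_txt)
```

Early fusion lets the joint model exploit cross-modal interactions, while late fusion keeps each modality's pipeline independent; real systems replace the linear scorers with deep networks but follow the same two patterns.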
This workshop calls for scientific works that illustrate the most recent progress in multi-modal deep learning, in particular multi-modal data capture, integration, modeling, understanding, and analysis, and how to leverage these to derive accurate and robust AI models in many applications. It is a timely topic, following the rapid development of deep learning technologies and their remarkable applications across many fields, and the workshop will serve as a forum bringing together active researchers and practitioners to share their recent advances in this exciting area. In particular, we solicit original, high-quality contributions that (1) present state-of-the-art theories and novel application scenarios related to multi-modal deep learning; (2) survey recent progress in this area; or (3) develop benchmark datasets and evaluations. We welcome novel results from various communities, e.g., visual computing, machine learning, multimedia analysis, and distributed and cloud computing.
More information at https://medical-and-multimedia-lab.github.io/MMDLCA2020/