In the past decade, machine learning based decision systems have been adopted in a broad range of application domains, such as credit scoring, insurance risk assessment, and health monitoring, in which accuracy is of the utmost importance. Although these systems have an immense potential to improve decisions in different fields, their use may pose ethical and legal risks, such as codifying biases, jeopardizing transparency and privacy, and reducing accountability. Unfortunately, these risks arise in many applications, and they are made even more serious and subtle by the opacity of recent decision support systems, which are often complex and whose internal logic is usually inaccessible to humans.
Nowadays, most Artificial Intelligence (AI) systems are based on Machine Learning algorithms. The relevance of and need for ethics in AI are supported and highlighted by various initiatives from the research community that provide recommendations and guidelines for making AI-based decision systems explainable and compliant with legal and ethical requirements. These include the EU's GDPR, which introduces, to some extent, a right for all individuals to obtain "meaningful explanations of the logic involved" when automated decision making takes place, the "ACM Statement on Algorithmic Transparency and Accountability", Informatics Europe's "European Recommendations on Machine-Learned Automated Decision Making", and the "Ethics Guidelines for Trustworthy AI" provided by the EU High-Level Expert Group on AI.
The challenge of designing and developing trustworthy AI-based decision systems is still open and requires a joint effort across technical, legal, sociological and ethical domains.
The purpose of XKDD, eXplaining Knowledge Discovery in Data Mining, is to encourage principled research that will lead to the advancement of explainable, transparent, ethical and fair data mining and machine learning. This year the workshop will also seek submissions addressing important open issues in specific fields related to eXplainable AI (XAI), such as privacy and fairness, applications to real case studies, benchmarking, and the explanation of decision systems based on time series and graphs, which are becoming more and more important in today's applications. The workshop seeks top-quality submissions on ethical, fair, explainable and transparent data mining and machine learning approaches. Papers should present research results in any of the topics of interest for the workshop, as well as tools and promising preliminary ideas. XKDD invites contributions from researchers in academia and industry working on these challenges primarily from a technical point of view, but also from a legal, ethical or sociological perspective.
Topics of interest include, but are not limited to:
Submissions focusing on important open issues related to XAI are particularly welcome, e.g., XAI for fairness checking approaches, XAI for privacy-preserving systems, XAI for federated learning, XAI for time series and graph-based approaches, XAI for visualization, XAI in human-machine interaction, benchmarking of XAI methods, and XAI case studies.
The call for papers can be downloaded here.
Electronic submissions will be handled via EasyChair.
Papers must be written in English and formatted according to the Springer Lecture Notes in Computer Science (LNCS) guidelines following the style of the main conference (format).
The maximum length of either research or position papers is 16 pages in this format. Overlength papers will be rejected without review (papers with smaller page margins or font sizes than those specified in the author instructions and set in the style files will also be treated as overlength).
Authors who submit their work to XKDD 2022 commit to presenting their paper at the workshop in case of acceptance. XKDD 2022 considers the author list submitted with the paper as final: no additions to or deletions from this list may be made after paper submission, either during the review period or, in case of acceptance, at the final camera-ready stage.
A condition for inclusion in the post-proceedings is that at least one of the co-authors has presented the paper at the workshop (either online or in person, depending on how the situation evolves). Pre-proceedings will be available online before the workshop. A special issue of a relevant international journal with extended versions of selected papers is under consideration.
All accepted papers will be published as post-proceedings in the Springer Lecture Notes in Computer Science (LNCS) series.
All papers for XKDD 2022 must be submitted using the online submission system EasyChair.
In recent years we have witnessed the diffusion of AI systems based on powerful machine learning models, which find application in many critical contexts such as medicine, financial markets, and credit scoring. In such contexts it is particularly important to design Trustworthy AI systems that guarantee both the interpretability of their decisional reasoning and privacy protection and awareness. In this talk we will explore the possible relationships between these two ethical values, both central to Trustworthy AI, and answer research questions such as: How can explainability help privacy awareness? Can explanations jeopardize individual privacy protection?
From Attribution Maps to Concept-Level Explainable AI
Andrea Bontempelli, Stefano Teso, Fausto Giunchiglia and Andrea Passerini.
Emanuele Marconato, Andrea Passerini and Stefano Teso.
Jonathan Haab, Nicolas Deutschmann and Maria Rodriguez-Martinez.
Eric Ferreira Dos Santos and Alessandra Mileo.
Sam Pinxteren, Marco Favier and Toon Calders.
Johannes Jakubik, Jakob Schöffer, Vincent Hoge, Michael Vössing and Niklas Kühl.
Nikolaos Mylonas, Ioannis Mollas, Nick Bassiliades and Grigorios Tsoumakas.
Husam Abdelqader, Evgueni Smirnov, Marc Pont and Marciano Geijselaers.
The Relationship between Explainability & Privacy in AI
Alejandro Kuratomi, Evaggelia Pitoura, Panagiotis Papapetrou, Tony Lindgren and Panayiotis Tsaparas.
Raphael Baudeu, Marvin Wright and Markus Loecher.
Enrique Valero-Leal, Manuel Campos and Jose M. Juarez.
Corentin Boidot, Riwal Lefort, Pierre De Loor and Olivier Augereau.
Ataollah Kamal, Elouan Vincent, Marc Plantevit and Celine Robardet.
Ziheng Chen, Fabrizio Silvestri, Jia Wang, Yongfeng Zhang, Zhenhua Huang, Hongshik Ahn and Gabriele Tolomei.
Marta Marchiori Manerba and Virginia Morini.
Yves Rychener, Xavier Renard, Djamé Seddah, Pascal Frossard, and Marcin Detyniecki.
The event will take place at the ECML-PKDD 2022 conference venue, the World Trade Center (WTC) Auditorium, which is close to the train and bus stations and within walking distance of the historic center of the city.
Additional information about the location can be found on the main conference web page: ECML-PKDD 2022.
ECML-PKDD 2022 plans a hybrid organization for workshops: participants can attend online as long as they register for the conference using the videoconference registration fee (here). Please note that the videoconference registration fee also allows attendees to follow the main conference. However, interactions and discussions are much easier face-to-face, so we believe it is important for speakers to attend in person, and we highly encourage authors of submitted papers to plan to participate on-site at the event.
This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 834756 "XAI: Science and technology for the explanation of AI decision making".
This workshop is partially supported by the European Community H2020 Program under the funding scheme FET Flagship Project Proposal, grant agreement 952026 HumanE-AI-Net.
This workshop is partially supported by the European Community H2020 Program under the funding scheme INFRAIA-2019-1: Research Infrastructures, grant agreement 871042 SoBigData++.
This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, grant agreement 952215 TAILOR.
This workshop is partially supported by the European Community H2020 Program under the research and innovation programme, SAI (CHIST-ERA-19-XAI-010), by MUR (N. not yet available), FWF (N. I 5205), EPSRC (N. EP/V055712/1), NCN (N. 2020/02/Y/ST6/00064), ETAg (N. SLTAT21096), and BNSF (N. KP-06-AOO2/5).
All inquiries should be sent to
xkdd2022@easychair.org